SCN: Blog List - SAP BW Powered by SAP HANA

Data Loading to BW From ECC using ODP - SAP HANA Information Views


ECC on HANA is the source system and BW on HANA is the target system.


An Analytic View (ms/ANV) exists in the source ECC on HANA system.

The requirement is to load the output data of this Analytic View into a DSO in the BW system.

temp.PNG

As per the classical approach, the data would be loaded into the BW PSA via DB Connect / data provisioning options, and the DSO would then be loaded from the PSA.

If PSA data is not deleted regularly, it increases data storage costs, lengthens the downtime for maintenance tasks, and degrades data load performance.


I have implemented this requirement using the ODP - SAP HANA Information Views approach.

With this ODP approach, data is loaded directly from the ECC HANA Information Views into the target DSO using a DTP.

This makes PSA storage and InfoPackage execution obsolete.


As the first step of the data load, an ODP - SAP HANA Information Views connection (RFC ABAP connection) needs to be created in the BW system.

In the connection parameters, the host name of the ECC system has to be specified as the target.

Once the connection tests are successful, the new RFC connection appears under the ODP - SAP HANA Information Views folder.


temp.PNG


The ODP - SAP HANA Information Views connection (RFC ABAP connection) between the source and target systems looks as follows:

temp.PNG

Next, a DataSource is created in the BW system. In the pop-up, the ODP context is displayed as HANA Information Views.

In the Operational Data Provider text box, the name of the HANA Information View (Analytic View / Calculation View) has to be entered.


temp.PNG


The value help pop-up shows the list of Information Views available in the source ECC HANA system.

Select the appropriate Information View from the list.

Once the DataSource is created, there is no need to create an InfoPackage.

temp.PNG


The BW target can be either a classic DSO or an advanced DSO.

The target DSO/ADSO is created and the transformation mapping is done from the ODP DataSource to the DSO.

 

In the DTP with ODP - SAP HANA Information Views as the source, the Delta extraction mode is not supported, so the extraction mode is changed to Full.

Under the parameters of the DataSource,

Data Extraction is selected as "Directly from Source system. PSA not used (for small amount of data)".


temp.PNG


Ignore the warning pop-up "Last change is incompatible with SAP HANA Processing".

(The DTP's SAP HANA Execution processing mode will be disabled.)

temp.PNG


The DTP adapter is changed to "Extraction from SAP Source system by Operational Data Provisioning".

temp.PNG


Activate and execute the DTP. After the DTP execution has completed, the DSO data has to be activated.

The DSO output is:

 

temp.PNG

 

Data is loaded successfully into the BW DSO from the source SAP HANA Information Views, without PSA storage or InfoPackage execution.

 

Your thoughts and comments are welcome.

 

Regards,

Muthuram





Advanced DSO Models - Part 2


Hi All,


The previous blog discussed how an ADSO can act as a standard DSO:

SAP BW on HANA Advanced DSO - Part 1


Now we will discuss how an ADSO can act as an InfoCube.

An advanced DSO is a persistent object that combines properties of the following objects:

  1. Field-based structure as in the PSA (field-based modeling)
  2. No activation required, like a write-optimized DSO (update property)
  3. Three tables (active, new, change log table), like a standard DSO (overwrite property)
  4. InfoCube model – select the setting "all characteristics are keys" (addition property)

 

Advanced DSO 2nd Model

The previous blog discussed the first ADSO model, which acts like a standard DSO.

Now we will discuss how an ADSO acts like an InfoCube.

Go to the BW Modeling Tools in HANA Studio -> select the InfoArea -> click on Advanced DSO.

1.png

Select the option "All characteristics are key, reporting on union of inbound and active table".

If you select this option, the ADSO acts like an InfoCube.

Then insert the InfoObjects into the ADSO as shown below.

2.png

One important point here: the Key option is disabled, which means all objects act as keys; we cannot define particular objects as keys.

3.png

The following options are available under the Settings section.

We can define partitions and indexes on the ADSO.

The transformation mapping from the DataSource to the ADSO offers only one aggregation option, SUMMATION.

This means it acts like an InfoCube.

4.png

Under Rule Type we can read records from the ADSO model's active data table.

6.png

When the ADSO is designed and activated, all the tables are generated, but the change log table is not used.

7.png

Sample PSA data  with 3 customers

8.png

Active data table records after loading:

11.png

After the second load, the values of the ZIA_AMT amount key figure are added up.

10.png
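For illustration, the additive behavior corresponds roughly to the following SQL sketch. The schema name, the ADSO table names and the field names are assumptions for this example (BW generates its own tables and reporting view, so this is not the generated coding):

     -- Conceptual sketch only: reporting on the union of inbound and active table,
     -- with the key figure aggregated by SUM because all characteristics are keys.
     SELECT "CUSTOMER",
            SUM("/BIC/ZIA_AMT") AS "ZIA_AMT"
       FROM ( SELECT "CUSTOMER", "/BIC/ZIA_AMT"
                FROM "SAPABC"."/BIC/AZIA_IC011"     -- assumed inbound table of the ADSO
              UNION ALL
              SELECT "CUSTOMER", "/BIC/ZIA_AMT"
                FROM "SAPABC"."/BIC/AZIA_IC012"     -- assumed active data table of the ADSO
            ) AS u
      GROUP BY "CUSTOMER";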


The next blog will discuss how an ADSO can act as a write-optimized DSO.


Thanks,

Phani.

Update – Data LifeCycle Management for BW-on-HANA


Data managed by the SAP BW application can be classified into 3 basic categories:

Hot – high-frequency access (planning, reporting, advanced analytics, data manipulations)

Warm – less frequent access, relaxed performance SLAs (usually batch jobs), simple access patterns

Cold – archive, data lakes

 

For the “warm” data in your BW system SAP HANA offers currently 2 options: the non-active data concept (high-priority RAM displacement) and HANA dynamic tiering (using an ExtendedStorage server). We have evaluated the situation anew and have seen that with the advancements in hardware technology and in SAP HANA we do now have an even simpler and more intriguing option. Instead of introducing a new ExtendedStorage server into your HANA cluster to store the “warm” data you can use standard HANA nodes but with a different sizing formula and RAM/CPU ratio instead.

You basically run an “asymmetric” HANA scale-out landscape: a group of nodes (for your “hot” data) with standard sizing and another group with a “relaxed sizing” (you basically store more of the “warm” data on these nodes than RAM is available – the “extension” group). This allows you to run a HANA scale-out landscape with fewer nodes and less overall RAM but with the same data footprint.

 

Such a setup can be significantly easier to set up and administrate and it offers, right out of the box, all features of HANA with respect to operations, updates, and data management. The differentiation of data into “hot” and “warm” can be done easily via the BW application and using the standard HANA techniques to re-locate the data between nodes.

 

We are currently preparing the boundary conditions for these setups and are in intensive discussion with our hardware partners to enable attractive offers. The goal is to start a beta program in mid-2016. Please stay tuned for more details to follow very soon.

 

Please note that this setup is currently only planned for BW-on-HANA since it heavily relies on BW's partitioning, pruning and control of the access patterns. For applications looking for a more generic “warm” data concept, the HANA dynamic tiering feature is still a valid option. HANA dynamic tiering continues to be supported for BW as well.

AsymmetricLandscape_w_ExtensionGroup.png

#BWonHANA at #SAPPHIRENOW 2016


SAPPHIRENOW2016-showfloor
SAPPHIRE NOW Showfloor

On May 17-19, SAPPHIRE NOW and ASUG 2016 took place in Orlando. SAP Business Warehouse (BW) typically doesn't receive that much attention in such events as it has been in the market for some time. Still, this time, it has received quite some attention with (1) a large number of customer presentations in the context of ASUG 2016 and (2) a surprisingly prominent role in Hasso's keynote. While I cannot provide an exhaustive coverage, here are a few selected highlights that I managed to capture.

 

Dolby Laboratories Inc. (Session DE34260)

Dolby presented their move to BW-on-HANA in a 30 min session on Tuesday (17 May). Interesting to me was that they evaluated BW-on-HANA against a number of competitive alternatives as can be seen in figures 1 and 2.

Dolby-2
Fig. 1: Dolby's evaluation of competitive alternatives.

Dolby-3
Fig 2: Conclusions from the evaluation.

 

John Hopkins (Session A4690)

John Hopkins talked about their experience migrating SAP Business Suite and BW from Oracle to HANA. Interestingly, the go-live for their BW-on-HANA system was scheduled for the very Friday (20 May) of that week. Still, they had the time to attend SAPPHIRE. Great. The go-live was successful. They offered some statistics on their migration. They did not use DMO but a standard (export - import) migration. Still, the result looks great. See figure 3. John Hopkins' session material can be found here.

John-Hopkins-3
Fig 3: John Hopkins' migration stats.

 

Bell Helicopters (Session A4128)

Bell's initial motivation in their Oracle-based EDW landscape (1 BW + 4 native Oracle DWs) was - amongst other things - to leverage BW-on-HANA's openness regarding SQL access in order to better support non-SAP analytics frontends like Qlik as well. They learned about HANA's calc view capabilities, liked the Eclipse based modeling environment for BW and HANA and also used BO universes to tap into the newly created SQL consumable models (figure 4). That led to the plan to consider HANA (and BW-on-HANA) as the consolidation environment for their entire EDW landscape (figure 5). Bell's presentation material can be found here.

Bell-Helicopters-03
Fig 4: BW-on-HANA experience by Bell Helicopters.

Bell-Helicopters-05
Fig 5: Plan to consolidate Bell's EDW landscape via BW-on-HANA.

 

Hasso's Keynote

Around 10'20" into his keynote, Hasso showed the slide displayed in figure 6. It describes his vision of the SAP systems future with BW featuring prominently. A few minutes later, he elaborated on the future of BW, his ideas for that and indicated initiatives in that direction.

Hasso-Keynote
Fig 6: Vision of the SAP systems future.

 

This blog is also available here. You can follow me on Twitter via @tfxz.

Overview SAP Business Warehouse


Upgrade option available for target SAP BW 7.5


Hi All,

 

When planning a BW upgrade to target version BW 7.5, with Oracle as the database and Windows as the operating system, starting from source BW release 7.0x NUC:

 

We have the following approach to reach the target.

 

Capture.JPG

 

Hope this helps customers planning a BW upgrade with Oracle as their database.

 

Thanks,

Avinash

Implementation SAP-NLS Solution with SAP IQ


Hi Friends,

 

This is my first blog on SCN. I am very interested in sharing my thoughts and knowledge.

Recently we implemented the NLS solution for one of our major clients. I am going to share the experience.

 

OS Platform: SUSE Linux

Database: Sybase IQ Support Pack 08, Patch Level 10

 

What is meant by SYBASE IQ Server?

               SAP® Sybase® IQ is a high-performance decision-support server designed specifically for mission-critical business intelligence, analytics, and data warehousing. Component Integration Services within SAP Sybase IQ provide direct access to relational and non-relational databases on mainframe, UNIX, or Windows servers.

 

Since we implemented on Linux, I am concentrating on Linux:

Pre-installation Tasks:

  • Check for Operating System Patches.
  • Increase the Swap Space.
  • Install Required Support Packages and Libraries.
  • Disable HugePages and Transparent HugePages.
  • Set the Kernel Parameters.


Supported Server Platforms

  • SAP IQ is compatible with these platforms and operating systems on Linux:
  • Red Hat Enterprise Linux 6.2 (and later) Linux on POWER; 64-bit
  • Red Hat Enterprise Linux 6.2 (and later) x86-64, Advanced Server and Workstation Editions
  • Red Hat Enterprise Linux 5.5 (and later) Linux on POWER; 64-bit
  • Red Hat Enterprise Linux 5.5 (and later) x86-64, Advanced Server and Workstation Editions
  • SuSE Linux Enterprise Server SLES 11/Linux on POWER – minimum patch level SP1
  • SuSE Linux Enterprise Server SLES 11/X86_64 64-bit – minimum patch level SP1

 

Preinstallation Tasks

 

Swap Space:

  • The recommended minimum swap space is at least 1GB. Certain operations may cause the memory usage to grow dynamically. Changing the way SAP IQ uses buffers can dramatically change the way it uses memory.

1.PNG

Asynchronous I/O (AIO) Kernel Support

  • Asynchronous input/output (AIO) applications which use the native AIO interfaces require the libaio package. AIO provides an interface that submits multiple I/O operations and other processing requests in a single system call, and an interface that collects completed I/O operations associated with a given completion group.

2.PNG

Disable HugePages and Transparent Huge Pages

  • Allocating large amounts of memory to HugePages can significantly degrade SAP IQ performance. If the kernel allocates more than just a few MB of memory to HugePages, remove the HugePages options

3.PNG

 

Set the Kernel Parameters

Set the number of available semaphore identifiers.

  • SEMMSL – maximum number of semaphores per set.
  • SEMMNS – maximum number of semaphores system–wide.
  • SEMOPM – maximum number of operations allowed for one semop call.
  • SEMMNI – maximum number of semaphore identifiers (sets).

4.PNG

 

Check for Operating System Patches

5.PNG

 

 

INSTALLATION

--> Download the required Sybase software from SMP (we have used Sybase IQ SP 8 PL 10).

--> Transfer the file to your server.

--> Go to the path where you transferred the Sybase software.

--> Start the installation: ./setup.bin

 

Installation screenshots: 11.PNG to 23.PNG

24.PNG

 

 

Configuration screenshots: 30.PNG to 35.PNG

We will see the rest of the configuration and the registration of the SYBASE IQ system with the BW on HANA server in Blog 2.

 

Thanks,

Arunkumar.S

Lessons learnt: BW Delta DataMart from an aDSO using ODP Framework


Purpose:

The purpose of this blog is to highlight some of the issues and resolutions encountered while setting up a delta DataMart scenario that loads data from an aDSO using the ODP framework.

 

Scenario:

  • Setup DataMart BW connection using ODP framework (i.e. Connect from local BW system to Global BW system)
  • Setup delta DataMart from aDSO to Global BW using ODP Framework

 

Landscape:

Source and target systems are BW 7.4 SP13

 

 

Issues and resolutions:

System Connectivity Issues:

  • Destination XYZ : ping waiting time (5 Seconds) exceeded

ping waiting time.jpg

  • Resolution
    • Depending on your release, please review notes 2155845, 2203314 and 2199075
    • NB: Please note that for note 2203314, you may need to configure the ‘<technical name of ODP source system>_DIALOG’ with the background user details

 

ODP Issue:

  • Issue Overview: Unable to load delta directly from aDSO using ODP Framework

 

The Delta Process in the DataSource definition in the target BW is set to "Delta only using full upload (DSO or InfoPackage selection)":

Issue_datasource.jpg

DTP:

The only Delta option is ‘From PSA’

Issue_dtp.jpg

  • Resolution:
    • Apply SAP Note 2334387 (and dependent notes)
    • ‘Replicate metadata’ for affected Datasource
    • Review the updated Delta process in the definition of the Datasource:

resolution_datasource.jpg

  • The Delta option to extract directly from the source is now available. The DTP can also be enabled for RDA data transfer.

resolution_dtp.jpg


HANA based BW Transformation - SAP Notes


A1      SAP Notes

This blog provides an overview of the most important SAP Notes regarding BW transformations and the SAP HANA processing mode. This blog is part of the blog series HANA based BW Transformation.

 

A1.0 General Notes

2057542 - Recommendation: Usage of HANA-based Transformations

2230080 - Consulting: DTP: Out of memory situation during 'SAP HANA Execution' and the 'Request by Request' Extraction

 

A1.1      BW 7.40

2067912 - SAP HANA transformations and analysis processes: SAP Notes for SAP NetWeaver 740 with Support Package 8 or higher

2152643 - SAP HANA Processing: SAP HANA Processing: Sorting of records after call of expert script - Manual Activities

2222084 - DTP: Out of memory situation during 'SAP HANA Execution' and the 'Request by Request' Extraction

2254397 - SAP HANA Processing: BW 7.40 SP8 - SP14: HANA Analysis Processes and HANA Transformations (Part 19)

2299940 - SAP HANA Processing: BW 7.40 SP8 - SP15: HANA Analysis Processes and HANA Transformations (Part 20)

A1.2      BW 7.50

2192329 - SAP HANA Processing: BW 7.50 SP00 HANA Analysis Processes and HANA Transformations

2220753 - SAP HANA Processing: BW 7.50 SP00 - SP01: HANA Analysis Processes and HANA Transformations

2262474 - SAP HANA processing: BW 7.50 with SP00 - SP02: SAP HANA analysis processes and SAP HANA transformations

2281480 - SAP HANA processing: BW 7.50 with SP00 - SP03: SAP HANA analysis processes and SAP HANA transformations

2303781 - SAP HANA processing: BW 7.50 with SP00 - SP03: SAP HANA analysis processes and SAP HANA transformations (II)

HANA based Transformation (deep dive)


2      HANA based Transformation (deep dive)


This blog is part of the blog series HANA based BW Transformation.


Now I will look a little bit behind the curtain and provide some technical background details about SAP HANA Transformations. The information provided here serves only for a better understanding of BW transformations which are pushed down.

 

As part of the analysis of HANA executed BW transformations we need to distinguish between simple (non-stacked) and stacked data flows. A simple, non-stacked data flow connects two persistent objects with no InfoSource in between; only one BW transformation is involved. We use the term stacked data flow for a data flow with more than one BW transformation and at least one InfoSource in between.

 

Stability of the generated runtime objects

All information provided here is background information to give a better understanding of SAP HANA executed BW transformations.

It is important to keep in mind that all object definitions can change!

Do not implement anything based on the generated objects!

The

  • structure of a CalculationScenario (view names, number of views, …) or
  • generated SQL statements or
  • PLACEHOLDER
could be changed by the next release, support package or an SAP Note.

 

2.1      Simple data flow (Non-Stacked Data Flow)

 

A simple data flow is a data flow which connects two persistent BW objects with no InfoSource in between. The corresponding Data Transfer Process (DTP) processes only one BW transformation.

 

In case of a non-stacked data flow the DTP reuses the SAP HANA Transformation (SAP HANA Analysis Process) of the BW transformation, see Figure 2.1.

 

Figure_2_1.png

Figure 2.1: Non-Stacked Transformation


2.2      Stacked Data Flow

 

A stacked data flow connects two persistent data objects with at least one InfoSource in between. Therefore a stacked data flow contains at least two BW transformations. The corresponding Data Transfer Process (DTP) processes all involved BW transformations.

 

In case of a stacked data flow, the DTP cannot use the SAP HANA Transformation (SAP HANA Analysis Process) of the BW transformations. Strictly speaking, it is not possible to create a CalcScenario for a BW transformation with an InfoSource as source object. An InfoSource cannot be used as a data source in a CalculationScenario.


Figure_2_2.png

Figure 2.2: Stacked Transformation


Figure 2.2 shows a stacked data flow with two BW transformations (1) and (2) and the corresponding SAP HANA Transformations (3) and (4). There is no tab for the CalculationScenario in the SAP HANA Transformation (3) for the BW transformation (1) with an InfoSource as source object.

 

Therefore the DTP generates its own SAP HANA Transformations (6) and (7), one for each BW transformation. The SAP HANA Transformations for the DTP are largely equivalent to the SAP HANA Transformations (3) and (4) for the BW transformations.

 

In the sample data flow above, the SAP HANA Transformation (5) for the DTP gets its own technical ID TR_5I3Y6060H25LXFS0O67VSCIF8. The technical ID is based on the technical DTP ID, with the prefix DTP_ replaced by the prefix TR_.

 

The SAP HANA Transformations (5) and (6) are in fact a single object. I included it twice in the picture to illustrate that the SAP HANA Transformation (6) is based on the definition of the BW transformation (1) and is used (5) by the DTP.

 

Figure 2.3 provides a more detailed view of the generated SAP HANA Transformation. The SAP HANA Transformation (6) is based on the definition of the BW transformation (1) and is therefore largely identical to the SAP HANA Transformation (3). (3) and (6) differ only in the source object. The SAP HANA Transformation (6) uses the SAP HANA Transformation / SAP HANA Analysis Process (7) as the source object instead of the InfoSource as shown in (3). The SAP HANA Transformations (4) and (7) are also quite similar. They only differ with respect to the target object. In the SAP HANA Transformation (4), the InfoSource is used as the target object. The SAP HANA Transformation (7) does not have an explicit target object. Its target object is only marked as Embedded in a Data Transfer Process. That means the SAP HANA Transformation (7) is used as data source in a SAP HANA Transformation, in our case in the SAP HANA Transformation (6).


Figure_2_3.png

Figure 2.3: Stacked Transformation (detailed)


The technical ID of the embedded SAP HANA Transformation (7) is based on the technical ID of the SAP HANA Transformations (5) and (6) of the DTP. Only the digit 1 is appended as a counter for the level. This means that in case of a stacked data flow with more than two BW transformations, the next SAP HANA Transformation would get the additional digit 2 instead of 1, and so on.

 

Later on we will need the technical IDs to analyze a HANA based BW transformation, so it is helpful to understand how they are created.


2.3      CalculationScenario


To analyze a SAP HANA based BW transformation, it is necessary to understand the primary SAP HANA runtime object, the CalculationScenario (CalcScenario). The BW Workbench shows the CalcScenario in an XML representation, see (2) in Figure 2.4. The CalculationScenario is part of the corresponding SAP HANA Transformation (Extras -> Generated Program HANA Transformation) of the DTP. The CalculationScenario tab is only visible if the Expert Mode (Extras -> Expert Mode on/off) is switched on. Keep in mind that if the source object of the BW transformation is an InfoSource, the CalculationScenario can only be reached via the DTP metadata, see »Dataflow with more than one BW transformation« and »Stacked Data Flow«.


The naming convention for the CalculationScenario is:


     /1BCAMDP/0BW:DAP:<Technical ID – SAP HANA Transformation>


The CalculationScenario shown in the CalculationScenario tab, see (2) and (1) in Figure 2.4, is only a local temporary version, hence the additional postfix .TMP.


The CalculationScenario processes the transformation logic by using different views (CalculationViews) to split the complex logic into simpler single steps. One CalculationView, the default CalculationView, represents the CalculationScenario itself. The default CalculationView uses one or more other CalculationViews as source, and so on, see Figure 2.5. A CalculationView cannot be used in SQL statements, therefore a ColumnView is generated for each CalculationView.

 

Inside the SAP HANA database, the ColumnViews are created in the SAP<SID> schema, see (1). Each ColumnView represents a CalculationView, see paragraph 2.3.2.1 »CalculationScenario - calculationViews«, within a CalculationScenario. The create statement of each ColumnView provides two objects: first the CalculationScenario and, as the second object, the ColumnView based on the CalculationScenario. All ColumnViews that belong to a SAP HANA Transformation are based on the same CalculationScenario.


Figure_2_4.png

Figure 2.4: CalculationScenario and ColumnViews
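To locate the generated ColumnViews in the SAP HANA catalog, a query like the following sketch can be used (read-only, for analysis purposes; 'SAPABC' stands for your SAP<SID> schema and the name pattern follows the naming convention shown above):

     -- List the ColumnViews generated for SAP HANA Transformations (analysis only).
     SELECT VIEW_NAME, VIEW_TYPE
       FROM SYS.VIEWS
      WHERE SCHEMA_NAME = 'SAPABC'                 -- your SAP<SID> schema
        AND VIEW_NAME LIKE '/1BCAMDP/0BW:DAP:%';   -- naming convention of the generated objects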


SAP HANA internally uses the JSON notation to represent a CalculationScenario. Figure 2.5 shows the CalculationScenario depicted in Figure 2.4 (2) in a JSON analyzer. The tree representation provides a good overview of how the different CalculationViews are consumed. The JSON based CalcScenario definition can be found in the Create Statement tab in the column view definition in the SAP HANA Studio. The definition can be found in the USING clause of the CREATE CALCULATION SCENARIO statement; the definition starts with '[' and ends with ']'.


Figure_2_5.png

Figure 2.5: CalculationScenario in a JSON Analyzer


The SAP HANA Studio also provides a good tool to visualize a CalculationScenario, see Figure 2.6. To open the visualization tool, click on Visualize View in the context menu of a ColumnView based on a CalculationScenario. The visualization view is divided into three parts. The first part (1) provides a list of the CalculationViews which are used inside the CalculationScenario. Depending on the view definition, further information about variables, filters or attributes is available below the view node. The second part (2) provides an overview of the view dependencies, i.e. which view consumes which view. The third part (3) provides context-sensitive information for a view selected in the second part.


Figure_2_6.png

Figure 2.6: CalculationScenario visualization in the SAP HANA Studio


Now we will have a deeper look into the following sub nodes of the calculationScenario node (see (2) in Figure 2.4):

  • dataSources
  • variables
  • calculationViews


2.3.1 CalculationScenario - dataSources


The node dataSources lists all available data source objects of the CalculationScenario. The following data sources are used within a CalculationScenario in the context of a BW transformation:

  • tableDataSource
  • olapDataSource
  • calcScenarioDataSource

In the first sample transformation, we only use a database table (tableDataSource) as the source object, see Figure 2.7. The sample data flow reads from a DataSource (RSDS), therefore the corresponding PSA table is defined as tableDataSource. To resolve the request ID, the SID table /BI0/SREQUID is also added to the list of data sources.


Figure_2_7.png

Figure 2.7: CalculationScenario – Node: TableDataSource
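As an illustration of why both tableDataSources are needed, the following sketch joins a hypothetical PSA table to the request SID table; the PSA table name and its REQUEST column are assumptions, and the CalculationScenario performs this lookup internally:

     -- Resolve the request SID of the PSA requests via /BI0/SREQUID (illustration only).
     SELECT psa.*,
            req."SID" AS "REQUEST_SID"
       FROM "SAPABC"."/BIC/B0001234000" AS psa     -- hypothetical PSA table of the DataSource
       JOIN "SAPABC"."/BI0/SREQUID" AS req
         ON req."REQUID" = psa."REQUEST";          -- REQUEST column of the PSA (assumed)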


In the second sample, a transformation rule of type Master data read is used in the BW transformation. In this case an olapDataSource is added to the list of data sources. The olapDataSource uses the logical index (0BW:BIA:0MATERIAL_F4) of the InfoObject to read the required master data from the real source tables, see Figure 2.8.


Figure_2_8.png

Figure 2.8: CalculationScenario – Node: OLAPDataSource


To read the master data for the requested language, object version and key date, the following PLACEHOLDERs are added:

  • keydate
  • objvers
  • langu

 

The third sample is a stacked data flow. In a stacked data flow the CalculationScenario from the DTP uses another CalculationScenario as data source. In these cases, the calcScenarioDataSource is used. The variables defined in the upper CalculationScenario are passed to the underlying CalculationScenario to be able to push down these variables (filters) where possible to the source objects, see Figure 2.9.


Figure_2_9.png

Figure 2.9: CalculationScenario – Node: CalcScenarioDataSource

 

The values for the PLACEHOLDERS are passed in the SQL statement by using the variables, see paragraph 2.3.2 »CalculationScenario - variables«.

 

The PLACEHOLDER values for the variables keydate, objvers and langu are always set in the INSERT AS SELECT statement, whether they are used or not.


2.3.2       CalculationScenario - variables


In the node variables, all parameters are defined which are used in the CalculationScenario and can be used in the SQL statement to filter the result, see Figure 2.10.


Figure_2_10.png

Figure 2.10: CalculationScenario – Node: variables


Placeholder usage by customer

All variables and placeholders defined in a CalculationScenario in the context of a BW transformation are only intended for SAP internal usage. Variables and placeholder names are not stable, that means they can be changed, replaced or removed.


Figure 2.11 provides a further sample, based on a BW transformation with several dataSource definitions. A variable is used to control which dataSource, and ultimately which table, the SQL statement reads the data from.

 

The sample data flow for this CalculationScenario reads from an advanced DSO (ADSO) (based on a Data Propagation Layer - Template) with three possible source tables (Inbound Table, Change Log and Active Data). For each source table (dataSource), at least one view is generated into the CalculationScenario and all three views are combined by a union operation, see (2).

 

The input nodes are used to enhance all three structures by a new constant field named BW_HAP__________ADSO_TABLE_TYPE. The constant values

  • Inbound Table (AQ),
  • Change Log (CL) and
  • Active Data (AT)

can later be used as values for the filter $$ADSO_TABLE_TYPE$$, see (3). The filter value is handed over by the SELECT statement and depends, for example, on the DTP settings (Read from Active Table or Read from Change Log). To read data only from the active data (AT) table the following placeholder setting is used:

 

     'PLACEHOLDER'=('$$adso_table_type$$',   '( ("BW_HAP__________ADSO_TABLE_TYPE"=''AT'' ) )'),

 

For further information see 2.4 »SQL Statement«.

 

Figure_2_11.png

Figure 2.11: CalculationScenario – DataSource and Variable collaboration
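Conceptually, the combination of the constant field, the union and the filter behaves like the following SQL sketch (schema, table and field names are assumptions; the real logic lives in the generated CalculationScenario, not in such a hand-written statement):

     -- Conceptual sketch: union of the three ADSO tables with the constant table-type field,
     -- restricted to the active data table as requested via the $$adso_table_type$$ placeholder.
     SELECT *
       FROM ( SELECT 'AQ' AS "BW_HAP__________ADSO_TABLE_TYPE", "CUSTOMER", "AMOUNT"
                FROM "SAPABC"."/BIC/AZMYADSO1"     -- assumed inbound table
              UNION ALL
              SELECT 'CL', "CUSTOMER", "AMOUNT"
                FROM "SAPABC"."/BIC/AZMYADSO3"     -- assumed change log table
              UNION ALL
              SELECT 'AT', "CUSTOMER", "AMOUNT"
                FROM "SAPABC"."/BIC/AZMYADSO2"     -- assumed active data table
            ) AS u
      WHERE "BW_HAP__________ADSO_TABLE_TYPE" = 'AT';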


2.3.2.1 CalculationScenario - calculationViews


The next relevant node type is the node calculationView. A CalculationScenario uses several layered views (calculationViews) to implement the logic defined in the BW transformation. For the CalculationScenario and for each calculationView, a ColumnView is created, see (1) in Figure 2.4. The CalculationScenario related to a column view can be found in the definition of each column view.

 

There are several view types which can be used as sub node of a calculationView:

  • projection
  • union
  • join
  • aggregation
  • unitConversion
  • verticalUnion
  • rownum
  • functionCall
  • datatypeConversion


The view types as well as the number of views used in a CalculationScenario depend on the logic defined in the BW transformation and the BW / SAP HANA release.


A SELECT on a CalculationScenario (ColumnView) always reads from the default view. There is only one default view allowed. The default view can be identified by checking whether the attribute defaultViewFlag is set to "true". In the JSON representation in Figure 2.5, the default view is always shown as the top node.


The processing logic of each view is described in further sub nodes. The most important sub nodes are:

  • viewAttributes / attributes
  • inputs
  • filter

 

The used sub nodes of a CalculationView depend on the view type, on the logic defined in the BW transformation, and the BW / SAP HANA release.


CalculationScenario - calculationViews – view - viewAttributes / attributes


The nodes viewAttributes and attributes are used to define the target structure of a calculation view. The node attributes is used for more complex field definitions like data type mapping and calculated attributes (calculatedAttributes).

 

The InfoObject TK_MAT is defined as CHAR(18) with the conversion routine ALPHA. To ensure that all values comply with the ALPHA conversion rules, the CalculationScenario creates an additional field TK_MAT$TMP as a calculatedAttribute. Figure 2.12 shows the definition of the new field. The ALPHA conversion rule logic is implemented as a formula based on the original field TK_MAT.


Figure_2_12.png

Figure 2.12: CalculationScenario - CalculationAttributes


Calculated attributes are memory-intensive and we try to avoid them where possible. But there are some scenarios where calculated attributes must be used. For example, in case of target fields based on InfoObjects with conversion routines (see Figure 2.12) and in case of “non-trustworthy” sources. A “non-trustworthy” data source is a field-based source object (that is not based on InfoObjects), for example a DataSource (RSDS) or a field based advanced DataStore-Object (ADSO). In case of “non-trustworthy” data sources, the CalculationScenario must ensure that NULL values are converted to the correct type-related initial values.
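As a rough SQL equivalent of such calculated attributes, the following sketch shows the two cases described above; the source table and the second field are assumptions, and the generated CalculationScenario uses its own formula syntax rather than plain SQL:

     -- Conceptual SQL equivalents of the calculated attributes (illustration only).
     SELECT CASE
              WHEN "TK_MAT" LIKE_REGEXPR '^[0-9]+$'
                THEN LPAD("TK_MAT", 18, '0')        -- ALPHA-like conversion for CHAR(18)
              ELSE "TK_MAT"
            END AS "TK_MAT$TMP",
            IFNULL("DOC_NO", '') AS "DOC_NO"        -- NULL replaced by the type-related initial value
       FROM "SAPABC"."/BIC/B0001234000";            -- hypothetical field-based ("non-trustworthy") source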


CalculationScenario - calculationViews – view inputs


A calculation view can read data from one or more sources (inputs). CalculationViews and/or data sources can be used as sources. They are listed under the node input. A projection or an aggregation, for example, typically has one input node, and a union or a join typically has more than one input node.

 

The input node in combination with the mapping and viewAttribute nodes can be used to define new columns. In Figure 2.13 the attribute #SOURCE#.1.0REQTSN is defined as a new field based on the source field 0REQTSN.


Figure_2_13.png

Figure 2.13: CalculationScenario - input

 

CalculationScenario - calculationViews – view filter


The filter node, see (2) in Figure 2.14, is used in combination with the variable, see (1) in Figure 2.14, to provide the option to filter the result set of the union view (TK_SOI). The filter value is set as a placeholder in the SQL statement, see (3) in Figure 2.14.


Figure_2_14.png

Figure 2.14: CalculationScenario - input

 

I will come back to the different views later on in the analyzing and debugging section.


2.4 SQL Statement

 

Running a DTP triggers an INSERT AS SELECT statement that reads data from the source and directly inserts the transformed data into the target object. There are two kinds of filter options available to reduce the processed data volume: the WHERE condition in the SQL statement and the Calculation Engine PLACEHOLDER. Placeholders are used to set values for variables / filters which are defined in the CalculationScenario. Which placeholders are used depends on the logic defined in the BW transformation and the BW / SAP HANA release.
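To make the shape of such a statement concrete, here is a heavily simplified sketch. The target table, the field list and the placeholder values are assumptions that reuse the technical IDs from the examples in this blog; the real statement is generated by the DTP framework and must not be built manually:

     -- Simplified shape of a generated INSERT AS SELECT statement (illustration only).
     INSERT INTO "SAPABC"."/BIC/AZTGT_OUT1" ("COSTCENTER", "AMOUNT")   -- technical key fields omitted
     SELECT "COSTCENTER", "AMOUNT"
       FROM "SAPABC"."/1BCAMDP/0BW:DAP:TR_3UFAFGH3GRHWINN06BPB0WEA91"
            ( 'PLACEHOLDER' = ('$$client$$', '100'),
              'PLACEHOLDER' = ('TR_3UFAFGH3GRHWINN06BPB0WEA91.$$filter$$',
                               '( ( "REQUEST" = ''REQU_CY3V82RJ5IBTOIYX0XK0OJTD5'' ) )') )
      WHERE "COSTCENTER" <> '';   -- additional WHERE condition, e.g. for fields without a variable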


PLACEHOLDER


The following list describes the most important PLACEHOLDERs which can be embedded in a CalculationScenario and used in the SQL statements. Which PLACEHOLDERs are used in the CalculationScenario depends on the implemented data flow logic.


Important note about the PLACEHOLDERs

PLACEHOLDERs are not stable and their definition can be changed by a new release, support package or note!

It is not supported to use the PLACEHOLDERs listed here inside an SQL script or any other embedded database object in the context of a BW transformation. This also applies to all PLACEHOLDERs used in the generated SQL statement.

 

 

$$client$$ – Client value to read client-dependent data. The placeholder is always set, whether it is used or not.

$$change_log_filter$$ – This placeholder is only used to filter the data which is read from the change log. The filter values are equivalent to the values of the placeholder $$filter$$. The placeholder is only used for advanced DataStore-Objects (ADSO). See also $$inbound_filter$$ and $$nls_filter$$.

$$change_log_extraction$$ – The value 'X' is set if the extraction is done from the change log. Used as part of the error handling.

$$datasource_src_type$$ – The placeholder is set to 'X' in case the data is read from the remote source object and not from the PSA.

$$dso_table_type$$ – This placeholder is used to control which table of a standard DataStore-Object (ODSO) is used as data source. For this, the field BW_HAP__________DSO_TABLE_TYPE can be set to: Active Table (0) or Change Log (3).

$$filter$$ – This placeholder is used to filter the source data where possible. That means the placeholder is typically used in the next view above the union of all available source tables. It contains the filters defined on the DTP extraction tab plus some technical filters based on REQUEST or DATAPAKID. In case an ABAP routine or a BEx variable is used in the DTP filter, the result of both is used to create the filter condition. The placeholder $$filter$$ is used for all BW objects except advanced DataStore-Objects (ADSO). To filter an ADSO see $$inbound_filter$$, $$change_log_filter$$ and $$nls_filter$$.

$$inbound_filter$$ – This placeholder is only used to filter the data which is read from the inbound queue. The filter values are equivalent to the values of the placeholder $$filter$$. This placeholder is only used for advanced DataStore-Objects (ADSO). See also $$change_log_filter$$ and $$nls_filter$$.

$$changelog_filter$$ – This placeholder is only used to filter the data which is read from the change log. The filter values are equivalent to the values of the placeholder $$filter$$. This placeholder is only used for advanced DataStore-Objects (ADSO). See also $$change_log_filter$$ and $$nls_filter$$.

$$keydate$$ – Date to read time-dependent master data. The variable value is applied to the logical index of an InfoObject in an olapDataSource. The variable is always set, whether it is used or not.

$$langu$$ – Language to read master data. The variable value is applied to the logical index of an InfoObject in an olapDataSource. The placeholder is always set, whether it is used or not.

$$navigational_attribute_filter$$ – This placeholder lists the filters based on navigation attributes which are used in the DTP filter.

$$objvers$$ – Object version to read master data. The variable value is applied to the logical index of an InfoObject in an olapDataSource. The variable is always set, whether it is used or not.

$$runid$$ – For some features it is necessary to store metadata in a temporary table. In a prepare phase, the metadata is inserted into this temporary table identified by a unique ID (runid). During runtime, the values are then read by using the runid of this placeholder. This placeholder is primarily used in a CalcScenario based on an explicitly created SAP HANA Analysis Process (see transaction RSDHAAP) and not in a SAP HANA Transformation.

$$target_filter$$ – The target filter is used in the SQL statement to ensure that only those records of the result set which match this filter condition are inserted into the target object. This placeholder is used if the filter condition is given by the target object, for example by a semantically partitioned object (SPO). A target filter is applied to the result set of the transformation.

$$datasource_psa_version$$ – Relevant version number of the PSA where the current request is located.

$$DATAPAKID$$.DTP – This value is set by the package size parameter maintained in the DTP. More information, especially about dependencies, can be found in the value help (F1) for the DTP package size parameter.

$$REQUEST$$.DTP – This placeholder contains the request ID of the target request. In simulation mode, the value is always set to the SID value of the request ID DTPR_SIMULATION in the request table /BI0/SREQUID.

 

 

 

 


The following SQL SELECT statement, see Figure 2.15, belongs to the CalculationScenario as shown in Figure 2.11. The SQL statement passes the value for the variable adso_table_type. The placeholder

 

     'PLACEHOLDER'=('$$adso_table_type$$',   '( ("BW_HAP__________ADSO_TABLE_TYPE"=''AT'' ) )'),

 

sets the field BW_HAP__________ADSO_TABLE_TYPE to 'AT'. In the union definition in Figure 2.11, see (2), the field BW_HAP__________ADSO_TABLE_TYPE is set to the constant value 'AT' for all rows provided by the active data table. In this way, the placeholder ensures that only data from the active data table is selected.

 

Figure_2_15.png

Figure 2.15: DTP - SELECT statement

 

Some PLACEHOLDERs get the transformation ID as a prefix to ensure that these PLACEHOLDER identifiers are unique, see the placeholder $$runid$$ in Figure 2.15 and the calcScenarioDataSource description in the blog HANA based BW Transformation.

 

 

HANA based BW Transformation - Analyzing and debugging


3 Analyzing and debugging HANA based BW Transformations


This blog is part of the blog series HANA based BW Transformation


In this part I'll describe how a BW transformation can be analyzed and debugged when it is executed in the database. Debugging refers primarily to BW transformations with an SQL script, but in this part I'll also describe how to get the INSERT / SELECT statement.

3.1      Analyzing HANA based BW transformation

 

When a DTP with the SAP HANA execution mode is executed, the runtime framework generates a SQL statement. The SQL statement selects the data from the source and inserts it directly into the target in one step. The generated SQL statement (INSERT AS SELECT) therefore reads from the ColumnView which is based on the CalculationScenario (SAP HANA Transformation), see the blog HANA based Transformation (deep dive).

 

The DTP runtime framework provides the option to generate the various SQL statements in simulation mode without executing them, see Figure 3.1. The SQL statements can then be used for further analysis purposes.

 

A SQL query based on a CalculationScenario (encapsulated in a ColumnView) is executed by the calculation engine (CalcEngine). Variables of a CalculationScenario are passed on to the CalcEngine via PLACEHOLDERs, e.g. the DTP filter values are handed over by one or more filter PLACEHOLDERs, depending on the designed data flow. Therefore we will not find the DTP filter directly in the WHERE condition.

 

In addition to DTP filter values there are more technical filters passed on by PLACEHOLDER. For example, one placeholder is used to read data in packages. Another is used to specify which source table will be used to read the data.

 

In addition to filtering a SQL statement based on a CalculationScenario via PLACEHOLDER values, it is also possible to use an additional WHERE condition. This is, for example, necessary if no variables are defined inside the CalculationScenario for these fields, see the WHERE condition of the INSERT AS SELECT statement in Figure 3.1.

 

Figure_3_1.png

Figure 3.1: Generated SQL segments based on the CalculationView


In case ABAP logic is used to specify the DTP filter values, the DTP framework executes the ABAP logic in a pre-step before generating the SQL statement. The resulting DTP filter values from the ABAP logic are integrated into the SQL statement. The following filter placeholder (TR_3UFAFGH3GRHWINN06BPB0WEA91.$$filter$$) includes the technical filter condition based on the fields REQUEST and DATAPAKID to build the packages, and additionally a DTP filter for the field COSTCENTER:

 

    'PLACEHOLDER'=('TR_3UFAFGH3GRHWINN06BPB0WEA91.$$filter$$', 

                  '( "COSTCENTER"=''0000001000'' ) AND

                    ( ( "REQUEST" = ''REQU_CY3V82RJ5IBTOIYX0XK0OJTD5'' AND

                      ( "DATAPAKID" >= ''000001'' AND "DATAPAKID" <= ''000009'' ) ) ) '),

 

In this sample the COSTCENTER value comes from an ABAP based filter (Selection type 6 (ABAP Routine)).

 

The DTP simulation mode, see Figure 3.1, can be used to generate the SQL statements in order to analyze the data read requests. The simulation mode does not execute the generated SQL statements, neither the SELECT nor the INSERT statement. In simulation mode both statements are only generated for analysis purposes.

 

The first step to analyze a BW transformation with a SAP HANA Expert Script is to check the procedure import and export data. In many cases the issue can be identified by checking the result data. For example for initial or NULL values, or too many rows, etc.

 

Next I’ll describe the steps to check the input and / or the output data of a SAP HANA Expert Script.

 

Figure 3.2  shows in (1) a data flow with a BW transformation with a SAP HANA Expert Script (named as SQL).


(2) shows the CalculationViews from the corresponding CalculationScenario for the BW transformation.


CalculationScenario in a stacked data flow
Keep in mind that the data flow is a stacked one; as explained in the blog »HANA based Transformation (deep dive)«, in case of a stacked data flow the CalculationScenario of the BW transformation is not reused directly but regenerated for the DTP!


We need the three marked CalculationViews to investigate the procedure behavior:

  • OPERATION.FUNCTION_CALL.INPUT
  • OPERATION.FUNCTION_CALL
  • OPERATION.FUNCTION_CALL.OUTPUT


Figure_3_2.png

Figure 3.2: Analyzing a SAP HANA Expert Script

 

To analyze each view it is necessary to capture the SQL statement from the DTP simulation mode and change some small parts, see (3) in Figure 3.2:

 

  1. Change the named field list to »*«. This is necessary because the field list could differ from that of the overall view.
  2. Add the digit 1 at the end of the ColumnView name. This step is necessary because our SAP HANA Expert Script is embedded in a BW transformation of a stacked data flow.
  3. Add the CalculationView name, separated by a dot, to the ColumnView name.


The CalculationView OPERATION.FUNCTION_CALL.INPUT can be used to check the procedure input parameter inTab. Both CalculationViews, OPERATION.FUNCTION_CALL and OPERATION.FUNCTION_CALL.OUTPUT, deliver the same data content. The column naming could be different but the content is the same. That means both views can be used to analyze the procedure result table outTab.
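Applying the three changes to the captured statement results in a query of roughly the following form. The technical ID below is the one from the filter example above and purely illustrative, and the PLACEHOLDER clause of the captured statement should be kept exactly as generated:

     -- Inspect the inTab of the SAP HANA Expert Script (note the appended "1" for the stacked data flow).
     SELECT *
       FROM "SAPABC"."/1BCAMDP/0BW:DAP:TR_3UFAFGH3GRHWINN06BPB0WEA911.OPERATION.FUNCTION_CALL.INPUT";
     -- For the outTab, use ...OPERATION.FUNCTION_CALL or ...OPERATION.FUNCTION_CALL.OUTPUT instead.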

 

If the input and output data of the procedure do not help to identify the unexpected procedure behavior, it is necessary to go deeper into the procedure analysis. It is possible to debug the corresponding database procedure for the SAP HANA Expert Script.

 

A new AMDP debugger for AMDP procedures is available with SAP NetWeaver 7.50 (SAP HANA Revision 97 or higher required).


3.1.1      Temporary Storage


Similar to the ABAP mode, it is possible to keep the result after a simulation in a temporary storage. For a BW transformation running in the SAP HANA mode it is only possible to keep the result data after the whole transformation. It is not possible to keep intermediate results as in the ABAP mode. The reason is the runtime behavior: a SAP HANA executed BW transformation processes the data in chunks, see INSERT AS SELECT in the first blog of the series HANA based BW Transformation.

 

Figure 3.3 shows the necessary steps to keep the result data available if the DTP is run in simulation mode. To achieve this, enable the Expert Mode, see (1), and ensure that the flag After Transformation is set, see (2).


Figure_3_3.png

Figure 3.3: Temporary Storage – HANA


In the DTP monitor you can check the result if you expand the Data Package node, see (3). The icon can be used to inspect the data in detail, see (4).


3.1.2      Debugging in BW 7.40 and SAP HANA < SP09


To enable customers on an older release and / or older revision, the BW transformation framework provides an option to generate a prepared database debug procedure to run the SAP HANA Expert Script in a native SAP HANA runtime environment.

 

Feature availability
The feature described here to create a debug procedure should only be used in a BW 7.40 system. In a release higher than BW 7.40 it is recommended to use the AMDP debugger directly, see paragraph »AMDP Debugger (BW 7.50 and HANA SP09 or higher)«. The feature will be disabled in BW 7.50.


To check if the AMDP debugger is available on your system, just try to add a breakpoint in your AMDP method PROCEDURE, see Figure 3.4. Double-click in the field to the left of the line number to add an AMDP breakpoint. If the AMDP debugger is available, the line will be marked with a dot. If the debugger is not available, you will get a message like the one in Figure 3.4. If the AMDP debugger is available, it is not necessary to generate a debug procedure and you can go forward to paragraph »AMDP Debugger (BW 7.50 and HANA SP09 or higher)«.


Figure_3_4.png

Figure 3.4: AMDP Debugger in BW 7.40


If no AMDP debugger is available we can create a debug procedure to analyze the SAP HANA Expert Script behavior. Below I will describe the steps to analyze an SAP HANA Expert Script.

 

The AMDP debug procedure can be generated in the DTP simulation mode. Switch the processing mode of the corresponding DTP to Serially in the Dialog Process (for Debugging), see (1) in Figure 3.5, enable the Expert Mode and execute the simulation. In the upcoming dialog Debug Request, switch to the tab Script Generation, see (2), and mark the SAP HANA Expert Script below your BW transformation. In case of a stacked data flow with more than one SAP HANA Expert Script (and more BW transformations), the list Generate Procedure for will offer one checkbox for each SAP HANA Expert Script per BW transformation. Once you have marked the procedure you want to create, execute the debug request. In the result monitor of the simulation request there is a new tab called Script Generation, see (3). Copy the entire procedure name into your clipboard (CTRL + C) and do not close the request monitor.


Figure_3_5.png

Figure 3.5: Steps to create an AMDP debug procedure

 

Next we will prepare for debugging. To start, it is necessary to create a debug configuration. Before we start to create the debug configuration we will add a filter on the procedure folder to reduce the list of available procedures in your system, see (1) in Figure 3.6. The debug configuration dialog does not offer any search functionalities. If the filter is applied you may notice some additional procedures. These are used internally and can be disregarded.

 

Do not remove the filter on the procedure folder. We need the filter later on if we create the debug configuration.


Figure_3_6.png

Figure 3.6: Prepare database procedure debugging

 

Open the <CLASSNAME>=>DEBUG_PROCEDURE and add a breakpoint (2) on each step where you want to stop to inspect your code during the execution. Keep in mind that the SQL Script debugger only has the Continue button to execute the code until the next breakpoint or the end. There is no option like Step In or Step Over as in the ABAP Debugger. As a consequence it is necessary to add a breakpoint on each code position before you start the process.


Structure of the debug procedure

The debug procedure is divided into two parts. The first part of the procedure creates the input table (inTab) for the SAP HANA Expert Script procedure, see Figure 3.6, line 11 to line 45. To get the correct input data for the procedure, the corresponding CalculationView from the CalculationScenario is selected. Therefore the view name OPERATION.FUNCTION_CALL.INPUT is added to the ColumnView name in the select statement.

Below the first part we’ll find the copied source code from the SAP HANA Expert Script.


Table for temp placeholder values during debugging

The placeholder values will be read from the table /1BCAMDP/0BW:DAP:PLACEHOLDER_TABLE to keep the debug procedure as generic as possible.

Do not use the table inside customer coding!


Next we will create a debug configuration. Therefore we switch to the debug perspective (Window => Perspective => Open Perspective => Other… => Debug).

 

To create a new debug configuration open the Debug Configurations dialog (Run => Debug Configurations…), see Figure 3.7. For reuse purposes you can name your new debug configuration (1). Ensure that the option Procedure to Debug is selected and choose Catalog schema (2). Use the Browse… button (3) to select your debug procedure. Select your debug procedure in the upcoming dialog from the Procedure folder in your database schema, see (4).


Figure_3_7.png

Figure 3.7: Create Debug Configuration

 

The Debug button stores the debug configuration and starts the debugging process immediately. The debug perspective provides several views to get information about the procedure execution. Figure 3.8 shows the most important views in the context of SAP HANA database procedure debugging.

 

The Debug view (1) displays the stack frame of the suspended threads for the procedure you are debugging. On the lowest level you can see the number of the line currently being processed.

 

The SQLScript view (2) shows the database procedure source code. On the left side, next to the line number, you can see a pointer indicating the line currently being processed (here line 72, at the end of the procedure).


Figure_3_8.png

Figure 3.8: Database procedure debugging

 

The Variables view (3) shows all defined variables. For a scalar variable, the value is displayed directly within the view (see the Value column). For variables based on a table type, only the number of rows is displayed in the view. For further details of a table type variable, open the data preview (context menu of the table type variable => Open Data Preview). The data preview can be used to inspect the result, see (4). Furthermore, the data preview provides the option to inspect the data type of each column, see (5). This feature is quite useful in case of dynamically generated columns based on constant selections or calculations.

 

3.1.3 AMDP Debugger (BW 7.50 and HANA SP09 or higher)

 

The blog How to debug ABAP Managed Database Procedures using ADT - Basics provides good basic information around AMDP and the new AMDP Debugger.

 

To check if the AMDP debugger is available in your release just try to create an AMDP breakpoint. The procedure is described in paragraph »Debugging in BW 7.40 and SAP HANA < SP09«, see above.

 

In this paragraph I'll provide a step-by-step description of how to debug a SAP HANA Expert Script by using the AMDP debugger.

 

To use the AMDP debugger, open the BW transformation with the SAP HANA Expert Script. It is necessary to use the embedded version of the Data Warehousing Workbench (RSA1) within the Modeling Tools for SAP BW powered by SAP HANA.

 

Use the button Expert Routine in the BW transformation UI, see (1) in Figure 3.9, to open the corresponding AMDP class with the implemented AMDP procedure (2).


Figure_3_9.png

Figure 3.9: Prepare AMDP debugger

 

An AMDP breakpoint can be added by double-clicking to the left of the line number. For each AMDP breakpoint a dot is created. If the AMDP debugger is active, the dot is green and the tooltip also provides information about the AMDP debugger status. If the AMDP debugger is not active, the dot is gray and the AMDP debugger can be activated by double-clicking the gray dot.

 

The breakpoint context menu provides further AMDP debugger options.

 

To start the AMDP debugger it is necessary to execute the DTP in the processing mode Parallel SAP HANA Execution, see (1) in Figure 3.10. Do not run the DTP in the simulation mode! The DTP simulation mode for a BW transformation will not execute the generated SQL statements.

 

Figure_3_10.png

Figure 3.10: Start the AMDP debugging process

 

If the process is caught by the AMDP debugger, the popup dialog Confirm Perspective Switch, see (2) in Figure 3.10, will appear. If you confirm the dialog, the IDE switches to the debug perspective, see Figure 3.11.

 

The debugging process is now similar to the database procedure debugging in paragraph »Debugging in BW 7.40 and SAP HANA < SP09«. (1) shows the call stack, (2) shows the procedure source code, (3) provides an overview of the available variables of the procedure and (4) shows a data preview.

 

Figure_3_11.png

Figure 3.11: Debugging an AMDP method / procedure

 

 





Step by Step Extracting ECC tables to SAP HANA using BODS and consuming data to SAP BW Powered by HANA


Business Scenario

 

To extract the sales order tables VBAK (sales document header data) and VBAP (sales document item data) from ECC to SAP HANA and further consume the data in SAP BW powered by HANA.

We are using BODS, an external server, to extract these tables.

 

Purpose-


To show how to extract SAP ECC tables into SAP HANA, how to do the modelling in SAP HANA, and how to further consume the data in SAP BW powered by HANA.

 

Prerequisite—


  • HANA database upgraded to HANA SPS 06 or above.
  • Knowledge of BO Data Services.

 

In order to consume HANA DB views in BW, there are mainly three objects:

  1. Transient Provider
  2. Virtual Provider
  3. Composite Provider

 

In this example we will use the Transient Provider.

 

To make it easier to understand, I have split this scenario into phases:

 

PHASE 1--


Step1>    Log in to BODS

               Create the Datastore connected to ECC.

 

1.PNG

 

Give the credentials and click on Advanced, where you need to enter the instance number. Apply and OK.

 

2.PNG

 

Observe the Newly created Datastore for ECC tables

 

3.PNG

 

1> Create the Datastore connected to HANA (Template Table).

 

4.PNG

 

Observe newly created Datastore

 

5.PNG

 

2> Now we will create the Project. Go to the Help menu and select Start Page.

 

6.PNG

3> Create the batch job.

     Select the Project--> New Batch Job

7.PNG

 

4> Create the Dataflow.

     Click on right panel of Batch job screen-->Add New-->Data flow

8.PNG

Rename the Data flow

 

9.PNG

 

5>  Model the flow.

Import the ECC tables (VBAK, VBAP) and use them as a source.

Use the Template table as the target for the HANA Datastore.

 

Now that we are on the Data Flow screen, we need to add the ECC tables (VBAK and VBAP) from the Datastore (MY_PROJ2_ECC_DATASTORE) as a source and the Template table as a target.

 

First we will add ECC table—

Select the Datastore MY_PROJ2_ECC_DATASTORE-->Expand-->Select Table-->right click-->Import by Name

 

10.PNG

 

11.PNG

 

Give the VBAK table name and click on Import button, likewise add one more table VBAP. Observe the added tables.

12.PNG

 

Now drag and drop the VBAK and VBAP tables into the Data Flow screen. These tables will be used as the source.

 

13.PNG

 

Now drag and drop the Template table as the target and assign the target name as well as the schema name.

  Here the owner name is the schema name (in SAP HANA).

14.PNG


6>  Create the query transformation--

     Here we can maintain the joins and apply the Filters as per requirement.

     Right click on Dataflow screen—>Add new-->Query

 

15.PNG


7>  Map the Tables to Query and Query to Template Tables.


16.PNG


8>  Double-click the Query to pass the desired fields from VBAK and VBAP.

     I have transferred a few fields from Schema In to Schema Out from the VBAK and VBAP tables by drag and drop. Here we can also write statements and apply WHERE clauses to filter the data. A hedged SQL sketch of the join logic follows the screenshot below.


     Save all, Validate.


17.PNG
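
For reference, a hedged SQL sketch of the join logic the Query transform implements in this example, assuming the usual header/item join of VBAK and VBAP on the sales document number VBELN (the selected columns are only examples):

     SELECT k."VBELN", k."ERDAT", k."KUNNR",
            p."POSNR", p."MATNR", p."NETWR", p."WAERK"
       FROM "VBAK" AS k
      INNER JOIN "VBAP" AS p
         ON p."VBELN" = k."VBELN";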


Click on Back Button, You will land on Dataflow screen

Again save all.

 

9>  Execute batch job

 

     18.PNG

 

Job has executed successfully.

 

In Part 2 we will see how to do the modelling in SAP HANA on the ECC tables extracted using BODS and how to consume the HANA model in SAP BW powered by HANA.

 

      

      


         

Step by Step Extracting ECC tables to SAP HANA using BODS and consuming data to SAP BW Powered by HANA. PART-2

$
0
0

In the first part, we saw how to extract ECC tables to SAP HANA using BODS:

 

Step by Step Extracting ECC tables to SAP HANA using BODS and consuming data to SAP BW Powered by HANA

 

Now in this part we will create a Calculation view on the same extracted ECC tables in HANA and also consume this calculation view in SAP BW powered by HANA.

 

 

Step1> Log in to SAP HANA.

           Go to Catalog --> Expand the schema (which we gave as the owner name while creating the target table) --> Expand Tables.

           Observe the table -- VBAKVBAP

 

          1111.PNG

 

     Check the definition and content of the table by right-clicking on it.

 

     19.PNG

 

     Step 2> Now we will create a Calculation view based upon the table “VBAKANDVBAP” with data category Cube.

      

     Select your Package-->Right click-->New-->Calculation view

      

          20.PNG

 

               Click OK

 

          21.PNG

 

           Here we drag and drop the VBAKANDVBAP table to the Projection node, or add it by clicking on the plus icon.

 

          22.PNG

 

           Select the desired fields by clicking on the circle icon right before the field name in the Details output.

 

          23.PNG

 

         Step 3> Create calculated column.

 

                     Under the Columns folder I have the NETWR field “Net value of the order item in document currency”.

                     The business requirement is to add one more field called VAT.

                     The formula for VAT is “NETWR” * 0.5 / 100.
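
                     As a hedged SQL equivalent of this calculated column (assuming the extracted table "VBAKANDVBAP" from part 1 and that VBELN and NETWR were transferred):

                          SELECT "VBELN", "NETWR",
                                 "NETWR" * 0.5 / 100 AS "VAT"
                            FROM "VBAKANDVBAP";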

 

        24.PNG

 

          Click on Calculated Column Folder-->Right click-->New

           

 

          25.PNG

 

          Click on Validate Syntax for any Error.

          Observe the VAT field under calculated column.

 

          26.PNG

 

          Save and Activate

          See the Raw Data and Observe the Newly added field "VAT"

 

          27.PNG

 

        Step 4> Apply Rank node.

 

      Business scenario -- generate a report which shows the top 4 sales documents per date. I have made some changes in the calculation view.

       Do the below-mentioned steps as shown in the picture below:


     Remove “VBAKANDVBAP” table from projection. Drag and Drop "Rank" node to the modeling panel.

     Now add “VBAKANDVBAP” table to RANK node.

     Add Rank node to Projection node.

 

     28.PNG

 

     Now we will define the Rank, but first we need to know the functionality of the Rank node.

 

     Threshold—this value is used by the system to filter the result data set after it calculates the rank.

     Order By—this column is used to perform Order By after the system performs partition.

     Partition—this column is used to partition the source data set based on the columns we define.

 

     When you drag and drop the Rank node to the right panel, check the Rank node properties.

 

     Give the values as mentioned below (a hedged SQL sketch of what this computes follows the list):

    

      Sort Direction-- Descending (Top N)

      Threshold- 4

      Order By- VBELN

      Partitioned- ERDAT
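
      For orientation, a hedged SQL sketch of what this Rank configuration computes: per document date (ERDAT), keep the top 4 rows ordered by sales document number (VBELN) descending. The table name follows the example; the schema is omitted.

           SELECT *
             FROM ( SELECT t.*,
                           ROW_NUMBER() OVER ( PARTITION BY "ERDAT"
                                               ORDER BY "VBELN" DESC ) AS RANK_POS
                      FROM "VBAKANDVBAP" AS t )
            WHERE RANK_POS <= 4;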

 

     29.PNG

     NOTE—Make sure that at each node you have selected the fields which you want to see in the report output.

    

     Go to the Aggregation node, select the measures (NETWR, WAERK and VAT) --> Right click --> Add as Aggregated Column.

    

     30.PNG

       Go to Semantics --> Columns tab.

      Mark all measure fields as Measure and the rest of the fields as Attribute.

 

     31.PNG

     Save and Validate.

     Data Preview

     Raw Data

 

     Observe the Raw data based upon your Rank fields.

 

     32.PNG

      We have successfully created the Information view in HANA DB.

 

     PHASE 3—


    

     So far we have extracted the ECC tables into the HANA DB, created a calculation view on them and performed some operations. Now we will consume this calculation view in SAP BW by using a Transient Provider.

 

     Step 1> Log in to SAP BW.

    

     Go to System menu-->Status. My Database System is HDB.


     33.PNG


     Click on component (Magnifying glass symbol)—


     34.PNG

 

     Now we will create Transient Provider to consume HANA model (calculation view) in SAP BW powered by HANA

 

     NOTE--


     • In a Transient Provider, Analytic / Calculation views are allowed.
     • Creation of BEx queries is allowed.
     • MultiProviders are allowed -- a new tab has been added in the MultiProvider to add transient data.
     • CompositeProviders are allowed.
     • Master data is allowed, but only for display attributes.


    

     Step2 > Create Infoarea


     Step 3> T-CODE- rsdd_hm_publish


     Enter the same package name under which you have designed your calculation views.

 

     35.PNG

 

     Click on create, a new window will appear. Enter the InfoArea name.

 

     36.PNG

     Observe the Characteristics tab and give the reference InfoObject name, if any.

 

     37.PNG

 

     Observe Key figures Tab

 

     38.PNG

 

     Here the characteristics are what we defined as attributes in HANA, and the key figures are the measures.

     The name of the InfoProvider is @3CALV_VBAKANDVBAP.

     Transient Provider names start with @3 followed by the information view name.


    

     Transient Providers are virtual providers.

          Go To SE-->@3CALV_VBAKANDVBAP-->Display

          

          No table exists.


     We can create the Transient Provider directly in the Production system, but with the help of table RSLTIP we can create a transport request in Dev.

          Go to --> SE11 --> RSLTIP --> Display --> Content

    

     39.PNG

 

     Execute


     40.PNG

        

     We can create a transport request from the Table Entry menu and move the object.

    

     Now you can create the query on the InfoProvider @3CALV_VBAKANDVBAP and observe the report.



Open ODS View - How to Solve the Error - Fail to create scenario: [2950] User is not authorized

$
0
0

Hi All,

 

Good Day !!!!!!

 

 

Problem Statement - Open ODS View Authorization Error:

 

When we create an Open ODS View on top of tables from an external schema,

we might face an "Insufficient Privilege" error (Error: Fail to create scenario: [2950] User is not authorized) during activation.

In this blog, I have explained the cause and solution for this issue with one example.



System Details:


Assume BW application (DW1) is running on HANA database (HD1).

temp.PNG

From BW view,

"SAPDW1" schema will be considered as SAP BW schema and

"HDBUSER_1" schema (native HANA DB schema) will be considered as external schema.

 

 

System Version:

 

The version of the systems, which I used for below example.

 

BW:                                   7.40 SP11

HANA:                               1.0 SP10

Eclipse Modelling Tools:       1.13.3

 

 

Open ODS View Creation:


I am going to create ODS View "ZMS_ODS" on top of table "SALES" which is from "HDBUSER_1" schema.


Steps:


1. In eclipse, modelling perspective, connect SAP HANA system.


2. Navigate to "BW Modelling Tools" perspective and connect SAP BW system.


3. From BW system, attach SAP HANA system


4. Then create ODS View from BW system.

     Select "Semantics Type" as Facts

     Select "Source Type" as Database Table or View

     Type "DB Object" as  HDBUSER_1

     Type "DB Object Name" as SALES and

     Click "Finish"

      temp.PNG


5. Select "CUSTOMER_ID" and "PRODUCT_ID" as "Key Fields" and

    Select "ACTUAL_SALES" as "Key Figure".

    Check field properties (description, aggregation...), then activate the ODS View.

     temp.PNG

 

 

Problem Description:

 

Activation of ODS View will throw the error.

"column store error: fail to create scenario [2950] user is not authorized"

 

temp.PNG

 

 

Cause and Solution:

 

The Open ODS View is a BW object, and it interacts with the HANA DB using the SAP<SID> user.

 

So, if we create an ODS view on top of an external table,

the SELECT privilege on this external table should be granted to the SAP<SID> user.

In our case, the SELECT privilege on "HDBUSER_1"."SALES" (external table) should be granted to "SAPDW1" (BW user).

 

The below SQL has to be executed to solve the ODS View authorization issue:

GRANT SELECT ON SCHEMA "HDBUSER_1" TO "SAPDW1";
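
If granting SELECT on the whole external schema is broader than needed, a more restrictive alternative is to grant the privilege on the single table only (object names as in the example above):

GRANT SELECT ON "HDBUSER_1"."SALES" TO "SAPDW1";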

 

After the required privileges are granted to the BW user, the Open ODS View is activated successfully.

Screenshots of the created ODS View from the "Eclipse Modelling Tools" and the "SAP Logon GUI" are attached below.

 

temp.PNG

 

Eclipse Modelling Tools: temp.PNG

SAP Logon GUI: temp.PNG

 

 

Data Preview of ODS View:

 

I checked the ODS View data from SAP BW system. It shows the data correctly.

 

SAP HANA: Table ("HDBUSER_1"."SALES"): temp.PNG

SAP BW: Open ODS View (ZMS_ODS): temp.PNG

 

Best Regards,

Muthuram

HANA Memory Consumption by BW Objects

$
0
0

Hi All,

 

Good Day !!!!!

 

We know the monitoring views M_CS_TABLES and M_TABLES can be used to check HANA RAM memory consumption at table level.

But we cannot find information such as how much HANA memory is occupied by BW objects - InfoObjects, DSO, InfoCube, PSA, change log.
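
For comparison, a table-level check via the monitoring view M_CS_TABLES could look like this (the schema name SAPBW1 is only an example):

    SELECT "SCHEMA_NAME",
           "TABLE_NAME",
           ROUND("MEMORY_SIZE_IN_TOTAL" / 1024 / 1024, 2) AS "MEMORY_MB"
      FROM "M_CS_TABLES"
     WHERE "SCHEMA_NAME" = 'SAPBW1'
     ORDER BY "MEMORY_SIZE_IN_TOTAL" DESC;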

 

This blog helps to identify HANA memory consumption by BW objects - Info Objects, DSO, Info cube, PSA, Change log.


1. Open the attached SQL document - HANA_MEMORY_CONSUMPTION_BY_BW_OBJECTS.sql


2. In the SQL statement, one small change has to be done: replace the string SAPSID with your BW schema in all places.

    Example:

     If your BW schema is SAPBW1,

     WHEN "SCHEMA_NAME" = 'SAPSID' --> WHEN "SCHEMA_NAME" = 'SAPBW1'

     FROM "SAPSID"."RSTSODS" --> FROM "SAPBW1"."RSTSODS"


3. Execute the SQL statement and the user-defined function HANA_MEMORY_CONSUMPTION_BY_BW_OBJECTS will be created in your user schema.


4. Call the UDF and it will show the HANA memory occupied by the BW objects - InfoObjects, DSO, InfoCube, PSA, change log:

    SELECT * FROM "<User_Schema>"."HANA_MEMORY_CONSUMPTION_BY_BW_OBJECTS"();

     temp.PNG

    

 

It will help us to monitor the HANA memory occupied by all BW objects and to take decisions such as PSA cleanup and changelog cleanup.

 

The SQL can be further enhanced, for example by calculating

1. ADSO memory consumption,

2. InfoObject memory consumption by type (master data, text, hierarchy)

 

Regards,

Muthuram


HANA based BW Transformation - New features delivered by 7.50 SP04

$
0
0

4 HANA based BW Transformation - New features delivered by 7.50 SP04


This blog is part of the blog series HANA based BW Transformation.


Following new features are shipped with the BW 7.50 Support Package 04 (feature pack):

  • SQL Script (AMDP) based Start-, End-, and Field Routines
  • Error Handling

 

4.1 Mixed implementation (HANA / ABAP)

 

In a stacked data flow it is possible to mix HANA-executed BW transformations (standard push down or SQL Script) and ABAP routines. In a mixed scenario it is important that the lower-level BW transformations are HANA push-down capable. Lower level means the transformation is executed closer to the source object.

 

Figure 4.1 shows a stacked data flow with one InfoSource in between, see (1). The upper BW transformation (2) contains an ABAP start routine; therefore only the ABAP runtime is supported. In the lower BW transformation (3) only standard rules are used; therefore both the HANA and the ABAP runtime are supported.

Despite the fact that an ABAP routine is embedded in the data flow, the DTP settings do support SAP HANA execution, see (4), if the SAP HANA execution flag is set and the processing mode switch is set to (H) Parallel Processing with partial SAP HANA Execution, see (5).


Figure_4_1.png

Figure 4.1: HANA and ABAP mixed data flows

 

4.2      Routines

 

SAP Help: What's new - Routines in Transformations?

 

With BW 7.50 SP04 all BW Transformation routines can be implemented in ABAP or in SQL Script. Figure 4.2 shows the available routine types in a BW transformation context.

 

Figure_4_2.png

Figure 4.2: Available Routines in BW transformations

 

With BW 7.50 SP04 the concept, and therefore the menu structure, to create / delete a routine changed. With BW 7.50 SP04 all routines (Start-, Field-, End- and Expert routines) can be implemented in ABAP or in SQL Script. It is not possible to mix ABAP and SQL Script routines within one transformation.

 

Figure_4_3.png

Figure 4.3: Routines in BW transformation

 

The transformation framework always tries to offer both execution modes, ABAP and HANA. For more information see the main blog of this series.

 

When implementing the first routine of a BW transformation, the system asks for the implementation type (ABAP or SQL Script (AMDP Script)). Figure 4.4 shows the different routine implementation types and the impact of the selected implementation type on the execution mode.

 

Figure_4_4.png

Figure 4.4: Routine implementation type

 

Initially both execution modes (1), ABAP and HANA, are possible (unless you are using a feature which prevents a push down). The implementation type decision for the first routine within a BW transformation sets the implementation type for all further routines within this BW transformation. The dialog (2) will only come up for the first routine within a BW transformation. If you choose ABAP routine for the first routine the Runtime Status will change from ABAP and HANA runtime are supported to Only ABAP runtime is supported (3). If you choose AMDP script for the first routine the Runtime Status changes to Only HANA runtime is supported (4).

 

 

4.2.1       General routine information

For each SQL Script (Start, End, Field and Expert) routine a specific AMDP - ABAP class is created. For more information about the AMDP - ABAP class see paragraph 1.2.2.2 »The AMDP Class« in the initial blog »HANA based BW Transformation« of this blog series.

 

Only the method body (including the method declaration) is stored in the BW transformation metadata. You can find the source code of all BW transformation related routines (methods / procedures) in the table RSTRANSTEPSCRIPT.

 

Table replacement for expert routines
Up to BW 7.50 SP04 the procedure source code for SAP HANA Expert Scripts is stored in the table RSTRANSCRIPT. With BW 7.50 SP04 the storage location for AMDP based routines has been changed. With BW 7.50 SP04 the source code for all AMDP based routines (in the context of a BW transformation) will be stored in the table RSTRANSTEPSCRIPT

 

The column CODEID provides the ID for the ABAP class name. To get the full ABAP class name it is necessary to add the prefix “/BIC/” to the ID.
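
As a hedged sketch (only the column CODEID is taken from the description above), the generated class name can be derived like this:

  SELECT '/BIC/' || "CODEID" AS "AMDP_CLASS_NAME"
    FROM "RSTRANSTEPSCRIPT";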

 

The generated AMDP - ABAP classes are not transported. Only the metadata, including the method source code are transported. The AMDP – ABAP classes are generated in the post processing transport step in the target system during the BW transformation activation.

 

4.2.2       Routine Parameter

 

Parallel to the routines (Start-, End- and Field-Routines), error handling was also delivered with BW 7.50 SP04. As a result the method declaration has changed, including the SAP HANA Expert Script. To keep existing SAP HANA Expert Scripts valid the method declaration will not change during the upgrade to SP04, for more information see paragraph 4.2.3 »Flag - Enable Error Handling for Expert Script«.

 

4.2.2.1 Field SQL__PROCEDURE__SOURCE__RECORD

 

The field SQL__PROCEDURE__SOURCE__RECORD is part of all structured parameters except the output structure of a field routine. The field can be used to store the original source record value of a row.

 

Figure 4.5 shows an example how to handle record information during the transformation and the error handling. In the example data flow the source object is a classic DataStore-Object (ODSO).

 

The sample data flow uses two transformations; both implement a SQL Script (Start-, End- or Expert routine).

 

The inTab of the first SQL Script (AMDP Script (1)) contains information about the source data; in this example the source object provides technical information to create a unique identifier for each row. If you are reading data from the active table (see DTP adjustments) of an ODSO, it is not possible to get the necessary information from the source. In this case both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD are set to initial. The SQL Script does not contain logic to handle erroneous records.

 

If the source provides technical information to create a unique identifier, both columns RECORD and SQL__PROCEDURE__SOURCE__RECORD will be populated. In the inTab the content of both columns is the same. The columns are created by concatenating the technical fields REQUID, DATAPAK and RECORD for an ODSO and REQUESTTSN, DATAPAK and RECORD for an ADSO.

 

The field REQUEST (see Source in Figure 4.5) cannot be used as an order criterion because of the generated values. Therefore the related SID (see /BI0/REQUID in Figure 4.5) is used.

 

The business logic requires multiplying some source rows. To get a unique sortable column, the column RECORD is re-calculated with new unique sorted values. For the multiplied records the source record information in the column SQL__PROCEDURE__SOURCE__RECORD is untouched.


Figure_4_5.png

Figure 4.5: Source record information

 

The second BW transformation also contains a SQL Script (AMDP Script (2)) to verify the transferred data and identify erroneous records. The row with the record ID 4 is identified as an erroneous record. Therefore a new entry with the original record ID from the source object is written to the errorTab. The original record ID is still stored in the column SQL__PROCEDURE__SOURCE__RECORD.

 

This example explains the purpose of the column SQL__PROCEDURE__SOURCE__RECORD. I’ll provide more details about the error handling in paragraph 4.3 »Error Handling«.

 

4.2.2.2       Common Parameter

 

The following parameters are available in all new SQL Script routines created after the upgrade to BW 7.50 SP04:

  • I_ERROR_HANDLING (field) and
  • errorTab (table)

 

In addition, the field SQL__PROCEDURE__SOURCE__RECORD is a member of the importing parameter inTab and the exporting parameter outTab, with the exception of the field routine exporting parameter outTab.

 

The error handling related parameters (I_ERROR_HANDLING, errorTab and the additional field SQL__PROCEDURE__SOURCE__RECORD) are only available if the flag Enable Error Handling for Expert Script is set, see paragraph 4.2.3 »Flag - Enable Error Handling for Expert Script«.

 

 

 

There is a special handling for existing SAP HANA Expert Scripts which were created before upgrading to SP04. To preserve the customer code, for existing SAP HANA Expert Scripts the flag is not set by default. Therefore the error-related parameters are not added for existing SAP HANA Expert Scripts.

The input parameter I_ERROR_HANDLING is an indicator to mark the current processing step as the error handling step; for further information see paragraph 4.3 »Error Handling«.

 

The export parameter errorTab is used as part of the error handling to identify erroneous records; for further information see paragraph 4.3 »Error Handling«.

 

All output table parameters of an AMDP method must be assigned; otherwise the AMDP class is not valid. In case you are not using the error handling, the output table parameter errorTab must be assigned by using a dummy statement. The following statement can be used to return an empty errorTab:

 

  errorTab =
    SELECT '' AS ERROR_TEXT,
           '' AS SQL__PROCEDURE__SOURCE__RECORD
      FROM DUMMY
     WHERE DUMMY <> 'X';

 

 

 

 

4.2.2.3       Start- and End-Routine Parameter

 

The Start-, End-, and Expert routines all have the same method declaration:

 

class-methods PROCEDURE
  importing
    value(i_error_handling) type STRING
    value(inTab)            type <<Class-Name>>=>TN_T_IN
  exporting
    value(outTab)           type <<Class-Name>>=>TN_T_OUT
    value(errorTab)         type <<Class-Name>>=>TN_T_ERROR.

 

Only the type definitions of the structures TN_T_IN and TN_T_OUT differ between the routines.

 

In case of the start routine the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the SOURCE_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for a start routine.

 

In case of the end routine the inTab (TN_T_IN) and the outTab (TN_T_OUT) structures are identical and can be compared with the RESULT_PACKAGE in the ABAP case. It is possible to adjust the structure of the inTab for an end routine.

 

In case of the SAP HANA Expert Script routine the inTab (TN_T_IN) can be compared with the SOURCE_PACKAGE in the ABAP case and the outTab (TN_T_OUT) can be compared with the RESULT_PACKAGE in the ABAP case. The inTab always contains all fields from the source object and cannot be adjusted.

 

4.2.2.4       Field-Routine Parameter

 

The procedure declaration is exactly the same as the declaration for the Start-, End-, and Expert routines:

 

class-methods PROCEDURE
  importing
    value(i_error_handling) type STRING
    value(inTab)            type <<Class-Name>>=>TN_T_IN
  exporting
    value(outTab)           type <<Class-Name>>=>TN_T_OUT
    value(errorTab)         type <<Class-Name>>=>TN_T_ERROR.

 

The inTab contains the source field(s) and in addition the columns RECORD and SQL__PROCEDURE__SOURCE__RECORD, see paragraph 4.2.2.1 »Field SQL__PROCEDURE__SOURCE__RECORD«.

 

Important difference to ABAP based field routines
A field routine in the ABAP context is called row by row. The routine only gets the values for the current processing line for the defined source fields. In the HANA context a field routine is called once per data package. And the importing parameter (inTab) contains all values of the source field columns, see Figure 4.2.

 

I’ll provide more information about the difference in processing between SQL Script and ABAP in paragraph 4.2.5 »Field-Routine«.

 

4.2.3       Flag - Enable Error Handling for Expert Script

 

For all SAP HANA Expert Scripts created with a release before BW 7.50 SP04 it is necessary to enable the error handling explicitly after the upgrade. To set the flag go to Edit => Enable Error Handling for Expert Script.

 

To ensure that the existing SAP HANA Expert Script implementations are still valid after the upgrade to BW 7.50 SP04 the method declaration will be left untouched during the upgrade.

 

After the upgrade to BW 7.50 SP04 all newly created SQL Script routines will be prepared for error handling and the flag, see Figure 4.6, will be set by default.

 

Figure_4_6.png

Figure 4.6: Enable Error Handling for Expert Script


4.2.4       Start-Routine

 

Typical ABAP use cases for a start routine are:

 

  • Delete rows from the source package that cannot be filtered by the DTP
  • Prepare internal ABAP tables for field routines


 

General filter recommendation
If possible, try to use the DTP filter instead of a start routine. Only use a start routine to filter data if the filter condition cannot be applied in the DTP. Also ABAP routines and BEx variables can be used in the DTP filter without preventing a push down, see blog »HANA based BW Transformation«.


Using a filter in a start routine is still a valid approach in the push-down scenario to reduce unnecessary source data. Figure 4.7 shows the steps to create an AMDP based start routine to filter the inTab (source package) with a filter condition that cannot be applied in the DTP filter settings; a minimal sketch of such a routine body follows the figure.


Figure_4_7.png

Figure 4.7: AMDP Start Routine Sample
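
For orientation, a minimal sketch of such an AMDP start routine body; the filter field /BIC/TK_CUST and the condition are hypothetical, and the errorTab is assigned with the dummy pattern shown in paragraph 4.2.2.2:

  -- Keep only rows whose (hypothetical) customer field starts with 'C1',
  -- a condition that cannot be expressed in the DTP filter.
  outTab = SELECT *
             FROM :inTab
            WHERE "/BIC/TK_CUST" LIKE 'C1%';

  -- errorTab must always be assigned, even if error handling is not used.
  errorTab = SELECT '' AS ERROR_TEXT,
                    '' AS SQL__PROCEDURE__SOURCE__RECORD
               FROM DUMMY
              WHERE DUMMY <> 'X';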

 

The second use case for an ABAP start routine is not a recommended practice in the context of a push-down approach. Remember that the data in a push-down scenario is not processed row by row; the data is processed in blocks. Therefore we do not recommend using a field routine to read data from a local table like we do in ABAP. Furthermore, the logic of an AMDP field routine differs from an ABAP based field routine, see paragraph 4.2.5 »Field-Routine«. The better way in a push-down scenario is to read data from an external source using a JOIN in a routine.



4.2.5       Field-Routine

 

A field routine can typically be used if the business requirement cannot be implemented with standard functionalities, for example if you want to read data from an additional source such as an external (non-BW) table, or if you want to read data from a DataStore-Object but the source doesn’t provide the full key information, which prevents the usage of the standard DSO read rule.

 

Figure 4.8 shows the necessary steps to create an AMDP Script based field routine. Keep in mind that AMDP based routines can only be created in the Eclipse based BW and ABAP tools. The first step to create an AMDP Script based field routine is the same as for an ABAP based field routine, see (1). If you select the rule type ROUTINE a popup dialog asks for the processing type, see (2). Choose AMDP script to create an AMDP script based field routine. The BW Transformation framework opens the Eclipse based ABAP class editor. For this an ABAP project is needed, see (3). Please note that the popup dialog sometimes opens in the background. The popup lists the assigned ABAP projects. If there is no project you can use the New… button to create one.  After selecting a project the AMDP class can be adjusted. Enter your code for the field routine in the body of the method PROCEDURE, see (4).

 

Figure_4_8.png

Figure 4.8: Steps to create an AMDP Script based field routine

 

The SQL script based field routine processing is different from the ABAP based routine. The ABAP based routine is processed row-by-row. The SQL script based routine on the other hand is only called once per data package, like all the other SQL script based routines.

 

Because of this, all values of the adjusted source fields are available in the inTab. For all source field values the corresponding RECORD and SQL__PROCEDURE__SOURCE__RECORD information is available in the inTab.

 

For SQL Script based field routines the following points need to be considered:

 

  • The target structure requires exactly one result value for each source value; the outTab must contain the same number of rows as the inTab.
  • The sort order may not be changed. The result value must be on the same row number as the source value.

 

In case of using a join operator, pay attention that inner join operations could result in a subset of rows; a minimal sketch of a field routine body that respects these points follows.
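
For orientation, a minimal sketch of an AMDP field routine body under assumptions: the lookup table "EXT"."CUSTOMER_REGION", the source field /BIC/TK_CUST and the target field /BIC/TK_REGION are hypothetical, and the exact generated outTab structure should be taken from the generated AMDP class:

  -- One result row per inTab row: the LEFT OUTER JOIN keeps every source row,
  -- and ORDER BY RECORD preserves the original sort order.
  outTab = SELECT COALESCE(lkp."REGION", '') AS "/BIC/TK_REGION",
                  src."RECORD"
             FROM :inTab AS src
             LEFT OUTER JOIN "EXT"."CUSTOMER_REGION" AS lkp
               ON lkp."CUSTOMER" = src."/BIC/TK_CUST"
            ORDER BY src."RECORD";

  -- errorTab must always be assigned, even if error handling is not used.
  errorTab = SELECT '' AS ERROR_TEXT,
                    '' AS SQL__PROCEDURE__SOURCE__RECORD
               FROM DUMMY
              WHERE DUMMY <> 'X';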


4.2.6       End-Routine

 

Typical ABAP use cases for an end routine are post transformation activities, such as:

 

  • Delete rows which are obsolete after the data transformation, for example redundant rows
  • Cross-line data checks
  • Derive values for a specific column based on the transfer result

 

From the development perspective the end routine is quite similar to the start routine. Only the time of execution differs: the start routine runs before the transformation and the end routine afterwards.


4.3      Error Handling

 

In previous BW 7.50 SPs, switching on error handling in a DTP prevented a SAP HANA push down. As of BW 7.50 SP04 this is no longer the case, and enabling error handling in a DTP will not prevent a SAP HANA push down.

 

A DTP with enabled error handling processes the following steps:

  1. Determine erroneous records
  2. Determine semantic assigned records to:
    1. The new erroneous records
    2. The erroneous records in the error stack
  3. Transfer the non-erroneous records

 

Therefore it is necessary to call the transformation twice. The first call determines the erroneous records (step 1) and the second call transfers the non-erroneous records (step 3).

 

The error handling is only available for data flows with DataSource (RSDS), DataStore-Object classic (ODSO) or DataStore-Object advanced (ADSO) as source object and DataStore-Object classic (ODSO), DataStore-Object advanced (ADSO) or Open Hub-Destination (DEST) as target object.


4.3.1       Error handling background information

 

Here is some general background information about the technical runtime objects and the runtime behavior.

 

DTP with error handling and the associated CalcScenario
In the blog HANA based Transformation (deep dive), in paragraph 2.1 »Simple data flow (Non-Stacked Data Flow)«, I’ve explained that the DTP in case of a non-stacked data flow reuses the CalculationScenario from the BW transformation. If the error handling is switched on in the DTP, the BW Transformation Framework must enhance the CalculationScenario for the error handling. Therefore it is necessary to create a separate CalculationScenario for the DTP. The naming convention is the same as for a DTP for a stacked data flow: the unique identifier from the DTP is used and the prefix DTP_ is replaced by TR_.

 

Difference regarding record collection between HANA and ABAP

The HANA processing mode collects all records from the corresponding semantic group in the currently processed data package to which the erroneous record belongs and writes them to the error stack, see Figure 4.9. All records with the same semantic key in further processed data packages are also written to the error stack.

 

The ABAP processing mode writes only the erroneous record for the current processing package to the error stack. Further packages are handled in the same way as in the HANA processing mode. All records with the same semantic key are also written to the error stack.

 

4.3.1.1       Find the DTP related error stack

 

The related error stack (table / PSA) for a DTP can be found by the following SQL statement:

  SELECT "TABNAME"

    FROM"DD02V"

   WHERE"DDTEXT"like'%<<DTP>>%';


4.3.2       Error handling in a standard BW transformation

 

Fundamentally, the error handling in the execution mode SAP HANA behaves very similarly to the execution mode ABAP. There are some minor topics to explain regarding the SAP HANA processing of artefacts like runtime and modelling objects. In the next section we will also discuss differences in how the error handling is executed.

 

Figure 4.9 provides an example how error handling works in a BW transformation with standard transformation rules (meaning: no customer SQL script coding).

 

The business requirement for the BW transformation is to ensure that only data with a valid customer (TK_CUST) is written to the target. A valid customer means that master data is available for the customer. This means the flag Referential Integrity is set for the TK_CUST transformation rule.

 

Semantic Groups

In the context of SAP HANA processing semantics groups are not supported!

 

The error handling defines an exception for this limitation. The semantic group in combination with the error handling is used to identify records that belong together. You cannot use the error handling functionality to work around the limitation and artificially build semantic groups for SAP HANA processing. That means the processing data packages are not grouped by the defined semantic groups. The data load process ignores the semantic groups.

 

The logic implemented in the sample data flow in Figure 4.9 writes data for a Sales Document (DOC_NUMER) to the target only if all Sales Document Items (S_ORD_ITEM) are valid. This means that in case one item is not valid, all items for the related Sales Document should be written to the error stack. Therefore I chose Sales Document (DOC_NUMER) as the semantic group.

 

In the source there is one record with an unknown customer (C4712) for DOC_NUMER = 4712 and S_ORD_ITEM = 10. The request monitor provides information about how many records are written to the error stack. The detail messages provide more information about the reason.


Figure_4_9.png

Figure 4.9: Error handling in a standard BW transformation

 

The initial erroneous record of a group is marked in the error stack. The erroneous data can be adjusted and repaired within the error stack, if possible. If the transaction data or master data are corrected the data can be loaded from the error stack into the data target by executing the Error-DTP.


4.3.3       Error handling and SQL Script routines

 

In case of using SQL script routines within a BW transformation, the BW transformation framework sets a flag to identify which processing step (1st or 3rd) is currently being processed. Therefore all SQL script procedure (AMDP method) declarations have been enhanced, see paragraph 4.2.2 »Routine Parameter«. The following parameters are related to the error handling:

 

  • I_ERROR_HANDLING

The indicator is set to ‘TRUE’ when the BW transformation framework executes step 1, otherwise the indicator is set to ‘FALSE‘.

In this case the BW transformation framework expects only the erroneous records in the output parameter errorTab.

  • errorTab

The output table parameter can be used to hand over erroneous records during the 1st call.

The error table structure provides two fields:

    • ERROR_TEXT
    • SQL__PROCEDURE__SOURCE__RECORD

    The data input structure (inTab) is enhanced by the field SQL__PROCEDURE__SOURCE__RECORD for all routines (Start-, Field-, End- and Expert routine).

    The data output structure (outTab) for the Start-, End- and Expert routine is also enhanced by the field SQL__PROCEDURE__SOURCE__RECORD.

    The field SQL__PROCEDURE__SOURCE__RECORD is used to store the original record value from the persistent source object. For more information see paragraph 4.2.2.1 »Field SQL__PROCEDURE__SOURCE__RECORD«.


    Next I’ll explain, using an example, the individual steps of how a BW transformation is processed. As mentioned before, the following steps are processed in case the error handling is used:

     

    1. Determine erroneous records
    2. Determine semantic assigned records to:
      1. The new erroneous records
      2. The erroneous records in the error stack
    3. Transfer the non-erroneous records

     

    Only steps 1 and 3 must be considered in the SQL script implementation. Step 2 and its sub-steps are processed internally by the BW transformation framework.

     

    Figure 4.10 provides an overview of how the error handling is processed. To illustrate the runtime behavior I keep the logic to identify the erroneous records quite simple. The source object contains document item data for two documents (TEST100 and TEST200); for the first one five and for the second one four items are available. The record TEST100, Document Item 50 contains an invalid customer C0815, see (1). To ensure that document information is only written into the target if all items for the document are valid, I set the semantic key to document number (/BIC/TK_DOCNR), see (2). The procedure coding contains two parts: the first part supplies the data for the errorTab and the second part the data for the outTab, see (3). The procedure is called twice during the processing. The first call is to collect the erroneous records, see Figure 4.11. Based on the errorTab result the BW transformation framework determines the corresponding records regarding the errorTab and the semantic key. The collected records are written to the error stack, see (4). The second procedure call is to determine the non-erroneous records. As a result the collected erroneous records will be removed from the inTab before the second call is executed, see Figure 4.12. The outTab result from the second call is written into the target object, see (5).


    Figure_4_10.png

    Figure 4.10: Error handling with SQL script
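
    For orientation, a hedged sketch of how the two parts of such a routine body could look in an end routine (where inTab and outTab share the same structure); the customer field /BIC/TK_CUST and the master data table "/BIC/PTK_CUST" are assumptions and not taken from the example:

      -- Part 1: collect erroneous records (evaluated when i_error_handling = 'TRUE').
      -- Rows whose customer has no master data are reported via errorTab.
      errorTab = SELECT 'Unknown customer ' || src."/BIC/TK_CUST" AS ERROR_TEXT,
                        src."SQL__PROCEDURE__SOURCE__RECORD"
                   FROM :inTab AS src
                  WHERE NOT EXISTS ( SELECT 1
                                       FROM "/BIC/PTK_CUST" AS md
                                      WHERE md."/BIC/TK_CUST" = src."/BIC/TK_CUST" );

      -- Part 2: hand over the records unchanged; before the second call the
      -- framework has already removed the erroneous rows and their semantic group.
      outTab = SELECT * FROM :inTab;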

     

    Figure 4.11 shows the procedure from the sample above in the debug mode during the first call to determine the erroneous records. The parameter I_ERROR_HANDLING is set to TRUE, see (1). Only the first statement to fetch the result for the output parameter errorTab is relevant for this step, see (2). In my coding sample the second SELECT statement will also be executed but the result parameter outTab is not used by the caller. For simplicity I have kept the logic here as simple as possible, but note that from a performance perspective there are better options. The result from the SELECT statement to detect the erroneous records is shown in (3). Based on the SQL__PROCEDURE__SOURCE__RECORD ID the BW transformation framework determines the corresponding semantic records from the source and writes them to the error stack.


    Figure_4_11.png

    Figure 4.11: Error handling with SQL script – Determinate erroneous records

     

    The next step is to transfer the non-erroneous records. The BW transformation framework calls the SQL script procedure a second time, see Figure 4.12. Now the parameter I_ERROR_HANDLING is set to FALSE, see (1). From the coding perspective, see (2), only the second part to get the outTab result is relevant. The inTab, see (3), now contains only source data which can be transferred without producing erroneous result information.


    Figure_4_12.png

    Figure 4.12: Error handling with SQL script – Transfer the non-erroneous records

    What is #BW4HANA?

    $
    0
    0

    BW/4HANA is an evolution of BW that is completely optimised and tailored to HANA. The BW/4HANA code can only run on HANA as it is interwoven with HANA engines and libraries. The ABAP part is several million lines of code smaller compared to BW-on-HANA. It is free of any burden to stay, e.g., within a certain, "common denominator scope" of SQL, like SQL92 or OpenSQL, but can go for any optimal combination with what the HANA platform offers. The latter is especially important as it extends into the world of big data via HANA VORA, an asset that will be heavily used by BW/4HANA.

    So, what are BW/4HANA's major selling points? What are the "themes" or "goals" that will drive the evolution of BW/4HANA? Here they are:

     

    1. Simplicity

    Depending on how one counts, BW offers 10 to 15 different object types (building blocks like infocubes, multiproviders) to build a data warehouse. In BW/4HANA, there will be only 4 which are at least as expressive and powerful as the previous 15. BW/4HANA's building blocks are more versatile. Data models can now be built with fewer building blocks w/o compromising on expressiveness. They will, therefore, be easier to maintain, thus more flexible and less error-prone. Existing models can be enhanced, adjusted and, thus, be kept alive during a longer period that goes beyond an initial scope.

    Another great asset of BW/4HANA is that it knows what type of data sits in which table. From that information it can automatically derive which data needs to sit in the hot store (memory) and which data can be put into the warm store (disk or non-volatile RAM) to yield a more economic usage of the underlying hardware. This is unique to BW/4HANA compared to handcrafted data warehouses that also require a handcrafted, i.e. specifically implemented data lifecycle management.

     

    2. Openness

    BW/4HANA - as BW - offers a managed approach to data warehousing. This means that prefabricated templates (building blocks) are offered for building a data warehouse in a standardised way. The latter provides huge opportunities to optimise the resulting models for HANA regarding performance, footprint, data lifecycle. In contrast to classic BW, it is possible to deviate from this standard approach wherever needed and appropriate. On one hand, BW/4HANA models and data can be exposed as HANA views that can be accessed via standard SQL. BW/4HANA's security is thereby not compromised but part of those HANA views. On the other hand, any type of HANA table or view can be easily and directly incorporated into BW/4HANA. It is thereby not necessary to replicate data. Both capabilities mean that BW/4HANA combines with and complements any native SQL data warehousing approach. It can be regarded as a powerful suite of tools for architecting a data warehouse on HANA with all the options to combine with other SQL-based tools.

     

    3. Modern UIs

    BW/4HANA will offer modern UIs for data modeling, admin, monitoring that run in HANA Studio or a browser. In the midterm, SAPGUI will become obsolete in that respect. Similarly, SAP's Digital Boardroom, Business Objects Cloud, Lumira, Analysis for Office and Design Studio will be the perfect match as analytic clients on top of BW/4HANA.

     

    4. High Performance

    Excellent performance has been at the heart of BW since the advent of HANA. As elaborated above, BW/4HANA will be free of any burdens and will leverage any optimal access to HANA, which will be especially interesting in the context of big data scenarios as HANA VORA offers a highly optimised "bridge" between the worlds of HANA (RDBMS) and Hadoop/SPARK (distributed processing on a file system). Most customers need to enhance and complement existing data warehouses with scenarios that address categories of data that go beyond traditional business process triggered (OLTP) data, namely machine generated data (IoT) and human sourced information (social networks).

     

    The figure below summarises the most important selling points. It is also available as a slide.

     

    Major BW/4HANA selling points.

     

    This blog has been cross-published here and here. You can follow me on Twitter via @tfxz.

    BW4/HANA Roadmap Client Rumblings

    $
    0
    0

    With BW4/HANA being released a mere two days ago, reception has been mixed among clients.

     

    Essentially, the sentiment is that SAP has done an inadequate job discussing the release prior to the firm announcement on 31-AUG, and those with a BW on HANA migration "in progress" who have already substantially completed the 7.4 migration (SBX, DEV, QA) feel blindsided by the release of something that they perceive to be "substantially better."

     

    What could we, as practitioners, have done to better prepare our clients for this release?

     

    Well, one could make the argument that we practitioners could have looked to the HANA Distinguished Engineers for some indication of this major release; though, there is nary a mention of BW4/HANA on the Distinguished Engineer blog.

     

    SAP product development should choose an envoy to make upcoming announcements like this public to practitioners who are entrusted with roadmapping multi-billion dollar clients who rule deca-million or centi-million dollar budgets.  With foresight, we can remain trusted advisors, with HANA the center of the ERP universe for years to come.

    "BW on HANA … Now What?" (Part 1 of 3)

    $
    0
    0

    Let’s get acquainted

    Let me first briefly introduce myself: Freek Keijzer, background in physics (laser spectroscopy, radioactivity, dangerous stuff), 20 years experience with SAP starting as a key user, 15 years experience as SAP Business Intelligence (BI) consultant. I also brew beer. This may not seem relevant, but I noticed at occasions like birthday parties that this impresses people way more than all of my SAP achievements put together.

     

    But enough about me, let’s talk about you. You recently migrated your SAP Business Warehouse (BW) system to a HANA database. You gladly accepted the factor of 3 improvement in loading and query performance resulting from moving to this database alone. But after getting used to this improvement you think: “Shouldn’t there be more to this investment?”. The good news is: There is!

     

    SAP is very proud of its HANA technology, and its marketing machine is running at full steam. Customers and implementation consultants – including me on occasion - do not always share SAP’s enthusiasm for new SAP products or technology. And believe me, if they were exaggerating I would tell you. That’s the kind of person I am. But they are not exaggerating. The HANA database technology with in-memory operation and column-based store is indeed revolutionary. First of all: the speed of some database actions. For someone who is used to pressing the button “Activate DSO data” just before lunch hoping that activation is finished after lunch, it comes as a surprise if this action after migrating to HANA takes like … no time at all. Instantaneous! No reason for lunch breaks any more. Well … other than eating. Some consequences of HANA database technology on data warehousing with a BW on HANA system are described in the next paragraphs.

     

    Transactions and reporting back together in one box

    In the old days, writing to and reading from row-store database tables simultaneously was a bad idea. “Runaway queries” were the cause of transaction system failures on numerous occasions. Dedicated systems arose to separate reporting output from transaction input. These systems are called “data warehouses” as they usually store a lot of data. Data is duplicated from the transaction system to the data warehouse, a process called “data loading”. Within the data warehouse, data is stored in multiple layers, typically 3 or 4, to prepare data for reporting. This implies more data duplication and more data loading.

     

    With column-store database technology, writing to and reading from tables no longer interfere with one another. This means that transactions and reporting can be brought back together again, in one (HANA) box. This is what SAP means with “Embedded BW”. Take a look at the picture below. This is what the future of SAP reporting could look like. ERP tables – in the future S/4HANA – are directly accessed via virtual data models built with either Native HANA or BW objects. I used dotted lines in the interior of the box to emphasize that it is only one box. You can build your own virtual models or use the standard ones delivered with the software. Standard data models with BW objects are called “Business Content” (usually not virtual). With Native HANA objects or HANA views the standard models are called “HANA Live”. In practice, mixtures of standard and customized objects are found in most data-integration platforms. “On top” you can use dashboard, cockpits and apps delivered with the software, or you can build your own with fancy frontend tools like Lumira, Design Studio, BO Cloud and the other tools that are part of the Business Objects product portfolio (or not; it is hard to keep track; I will come back to frontend tooling in part 3).

     

    Future of SAP Reporting.jpg

    Now that’s a good term: “data-integration platform”. Let’s use that instead of “data warehouse”, since we aim at storing as little data as possible. Storing data physically is nowadays called “persistency” and it should be an exception (... for historic data, “snapshots” and data from systems without direct access coupling; I will address this in part 3). Strive for max. one persistent layer for data from tables that are not already in the box, and no persistent layers at all for tables inside the box. Virtual layers yes, and with purposes quite comparable to the ones in a traditional data warehouse: raw data, data prepared for consumption by other data flows, data prepared for reporting.

     

    No more cubes! No more infoobjects? No more data??

    For data warehousing veterans it may come as a shock, but multidimensional models - commonly known as cubes - are no longer required. Cleverly designed cubes with properly chosen facts, dimensions and master data were once an absolute necessity for reporting with acceptable performance. Not anymore. Column-based database technology doesn’t need it, and doesn’t use it. Cubes are immediately flattened by the BW on HANA technology. This is tremendously good news for owners of a data warehouse filled with badly designed cubes. There is clearly no point in developing any new cubes after migration to the HANA database. In practice, this comes down removing a complete layer in the data warehouse. Usually, the top three layers are a table layer consisting of “Data Store Objects” (DSO), a reporting layer consisting of cubes and a virtual layer consisting of “multiproviders”. Most if not all data-transformations already took place in the data flow below the DSO. The virtual layer still has a purpose in decoupling the data model from the report to maximize flexibility towards future data model changes. But the cube layer has lost its purpose. Hence, no more cubes!

     

    Another darling of BW veterans that may become obsolete in the future: infoobjects. You may have become fond of these little rascals with their attributes and their texts and their nine character definition limit, but wait until you need to build a couple of hundreds of them for a big model on non-SAP data. I can’t say I will miss them much. SAP is probably thinking the same thing, as in the newest BW objects – "Open ODS Views" and "Composite Providers" – infoobjects are not a necessity anymore. You can just work with the field names from the source system with no restrictions, with field definitions being copied automatically. Some master data can be added later. Not as rich as infoobject functionality, but good enough for simple data models and PoCs. You can build "data flows" in BW using these new objects only, and build a report on top using the field names of the source system. No more infoobjects?

     

    "Data flows" between asterisks, as there is no physical data flowing inside the virtual data flows. But why is it so important to get rid of the data as much as possible? Not because of data duplication, as data storage is cheap. Why then?

     

    “The Horror of Data Loading”

    Compared with the popularity of SAP ERP – worldwide market leader since the Stone Age – the BW data warehousing solution is surprisingly unpopular. Reasons mentioned are lack of agility and bad performance, but in my view the main reason for BW’s unpopularity is lack of data integrity caused by “The Horror of Data Loading”. Please take a look at another picture shown below. It depicts a typical data flow in BW with five layers, four of which contain data: Acquisition (raw data), Propagation (prepared for consumption), BTL or Business Transformation Layer (business rules applied), and a cube for reporting.

     

    Data Loading Sucks.jpg

     

    Between the Propagation DSO and the BTL DSO, data is read from master data tables and from other DSOs. The load between the DSOs is in delta mode, so missed data may never be restored. Of course such dependencies can be built into the logic of a process chain. If a data load goes wrong for some reason, all dependent loads are stopped automatically. This works fine as long as the data warehouse is relatively simple and data loading is stable enough for the overall process chain to finish most of the time.

     

    But many successful data model implementations later, the data warehouse may have become so complex, with dependencies so abundantly present, that the overall process chain rarely finishes on time, with support consultants working over-hours to fix things manually to the best of their ability. Most of the time, their focus is on "getting the data into the cubes" and not on following the proper order according to the dependencies on the way to the cubes. This results in "data holes" in the cubes that are never fixed. To improve the likelihood of the process chain actually being finished by morning, some of the dependencies are "loosened", thus creating more data holes. In complex data warehouse situations, one often sees initiatives to improve data loading stability. One such initiative was called "the swim lane". Critical data flows were defined with dependencies firmly built into the process chain logic. Dependencies for non-critical data flows were loosened. The main result of this initiative was that data quality outside of the swim lane deteriorated.

     

    Report users confronted with a “data hole” will notify the support desk. At least they will the first time. A support consultant will fix the issue by executing some sort of repair load. The report user may even report a second or third data hole he discovered. But at a certain point he or she will think: “Hey, it’s not my job to check data integrity on a daily basis! I may as well make downloads from the source system and do data-integration myself in Excel. At least I can be sure that the data I start with is correct.” And often that is exactly what happens. An apparently beautiful data warehouse, all glimmery and shiny, huge investment, and no one uses it because no one trusts the data.

     

    What to do? Simple: stop loading data! Go back to the very old days when a report on ERP data was always a program running in the ERP system collecting data from tables on the fly during query run-time. With abominable performance and pulling down the transaction system as a side-effect, but at least the data was always correct. The latest database technologies make it possible to go back to these very old days, but now with good performance and without side effects. And virtualization is key to this success.

     

    Follow-up

    This blog is the first in a series of three on the same topic. In the next part I will describe a stepwise approach to move towards the better SAP BI world as sketched above. The third part is intended for “miscellaneous topics”. I invite you to steer me towards any direction in the upcoming blogs by commenting on this one.

     

    The author works as Principal Consultant SAP Business Intelligence for Experis Ciber NL: http://www.ciber.nl/

    Why #BW4HANA ?

    $
    0
    0

    With the recent announcement of BW/4HANA some questions arise on the motivation for a new product rather than evolving an existing one, namely BW-on-HANA. With this blog, we want to shed some light on the discussions we have had and why we think that this is the best way forward. Here are the fundamental 3 reasons:

    1. Classic DBs vs HANA Platform

    Nowadays, HANA has become much more than a pure, classic RDBMS that offers standard SQL processing on a new (in-memory) architecture. There are a number of specialized engines and libraries that bring all sorts of processing capabilities close to where the data sits, rather than bringing the data to a processing layer such as SAP's classic application server. Predictive, geo-spatial, time-series, planning, statistical and other engines and libraries are all combined with SQL but go well beyond the traditional Open SQL scope that has been prevalent in SAP applications for almost three decades. Recall that Open SQL constitutes the (least) common denominator across the classic RDBMSs that have been supported in SAP applications. A long time ago, BW broke with that approach a bit by introducing RDBMS-specific classes and function groups that made it possible to leverage specific SQL and optimizer capabilities of the underlying RDBMS. Still, the mandate has to be to push BW's data processing more and more to where the data sits. Accommodating a "common denominator" notion (i.e. complying with "standard-ish SQL") impedes innovation at times, as it prevents adopting highly DW-relevant and effective capabilities from HANA.
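    As a small illustration of what "processing where the data sits" can mean in practice, here is a hypothetical sketch with invented table and column names (not taken from the blog): the distance calculation is evaluated by HANA's spatial engine inside the database, so only the small result set leaves HANA instead of every row being shipped to an application server and looped over there. This is the kind of capability a lowest-common-denominator Open SQL could not express.

        -- Hypothetical table STORES with a spatial column STORE_GEO of type ST_POINT (SRID 4326).
        -- The spatial engine computes the distance to a fixed reference point next to the data;
        -- with SRID 4326 the result is returned in meters.
        SELECT STORE_ID,
               STORE_GEO.ST_Distance(
                 NEW ST_Point('POINT(8.6821 50.1109)', 4326)
               ) AS DISTANCE_M
          FROM STORES
         ORDER BY DISTANCE_M;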

    2. Legacy Objects / Backward Compatibility

    BW was originally architected around the properties and the cost models imposed by the classic RDBMS. Over the past decade, cautious re-architecting has made it possible to continuously innovate BW and to safeguard the investments of BW customers. There has been a strong emphasis on keeping newer versions of BW as compatible with past versions as possible. Similar to sticking to "standard-ish SQL", this impedes innovation in some areas. BW/4HANA breaks with this strict notion of backward compatibility and replaces it with tooling for conversions that might require user interaction here and there, and thus some effort. However, this allows legacy to be removed not only inside the software product but also in existing DW instances that move from BW to BW/4HANA.

    Now, with some "baggage" removed, it has become easier to focus on new, innovative things without being constrained by backward-compatibility considerations whose only purpose is to keep older scenarios going that you would nowadays build differently (e.g. with BW/4HANA's new object types). In that sense, BW/4HANA is a much better breeding ground for innovation than BW-on-HANA can ever be. This is not because BW-on-HANA is a bad product but because it comes with a guarantee of supporting older approaches, which BW/4HANA does not.

    3. Guidance

    Finally, and this is basically the result of 1. and 2., many of our partners and customers have asked us for guidance on which of the many options BW provides they should use for their implementations. Some of those options exist simply because they were introduced some time ago, but they would actually be obsolete in a new product. So SAP has decided to reduce the complexity of choices and created a product, namely BW/4HANA, that offers only those building blocks that customers and partners should use now and in the future. The product has become simpler, and that will translate into simplified DW architectures.

    --

    I hope this blog helps you understand why SAP has moved from BW to BW/4HANA. In simple terms, it's similar to choosing between renovating and rearchitecting your existing house, or building and moving into a new house, with the latter fitting your furniture and all the other stuff that you cherish. We all hope that you will feel comfortable in the new home.

     

    This blog has also been published here. You can follow me on Twitter via @tfxz.

     

    PS: More details are revealed at Sep 7's SAP and Amazon Web Services Special Event.
