SCN Blog List: SAP BW Powered by SAP HANA

Table size and BW object size monitoring in BWoH made easy - Part 1


For a tool known for giving great insight into a business environment, it is ironic that it lacks information on one of its own key performance indicators. SAP BW has been a very successful platform for BI over the last 20 years or so. Almost from day one, people have asked the question: 'which areas use the most space on my database?' This question has become increasingly important in recent years as customers consider whether they are willing to pay a premium price for a HANA database.

The reporting abilities on database usage were first introduced five years ago as part of the technical content reporting in BW 7.3. There are still some flaws with the database space usage reporting:

  • Setting up the technical content reporting is fiddly and needs looking after
  • It takes an expert to interpret the results and turn it into easy to understand reporting
  • It is incomplete, only showing the sizes of BW objects and not of any other objects of significant size which might be sitting in the same BW database (Basis tables or other).

The last point is the most important one. Below is an example from a production system (BWoH) which filled up much faster than anticipated. The blue bars are part of standard business content reporting. The yellow bar is missing in the standard reports. If you think it is important to see this, keep reading and you will find out how to get an instant, complete overview of database space usage with minimal effort.

 

[Image: Size monitoring figure 1.png]

Figure 1: BWoH filling up more quickly than expected? Make sure to keep an eye on all objects and tables.

 

Most organisations running BWoH do not have the right tools to monitor HANA database usage, manage the database size effectively and plan for future growth in a cost-effective way. In this blog I describe a first step towards getting better insight into database space usage. The solution is based on a script which you can run instantly, without having to configure, develop or customize anything. Running this script gives insight into the 'as is' situation. If you want to plan effectively for future growth, you will also need some means of getting insight into historic growth. In my next blog I will describe how you can build the solution out, with minimal development effort, into a full-blown database size monitoring application which allows for trend reporting.

The result of running this ad-hoc SQL should look something like this, and will be available at HANA speed: within seconds and without any system development!

 

[Image: Size monitoring figure 2.png]

Figure 2: Output of running the ad-hoc script. From here it is a small step to clear insight and cool visualisations

 

This data can then be used to create some visualisations which provide great insight. Below are a few examples:

[Image: Size monitoring figure 3.png]

Figure 3a: Only 6 tables take up 55% of the total used database space

Figure 3b: Surprisingly, Cubes (in blue) don't take up much space. DSO's (Orange) do, as does Master Data (Grey).

Figure 3c: There seems to be something wrong with housekeeping on Basis tables

 

The solution (part 1): A SQL script

Just like any other database, HANA stores all the metadata about tables in its own dictionary tables. You can find out the size of, say, the active table of a DSO or a fact table of a cube if you know which dictionary tables to look at. You need some understanding of table partitioning, row and column stores, memory usage and disk space to interpret the results. Luckily SAP helps you out here, because HANA comes with extremely useful database views on top of the dictionary tables which make it easier to get a complete overview of table sizes. When you understand these views, half the job is done.

The other half is understanding how BW objects relate to tables in the HANA database. BW objects use several tables and to understand how database space is used it is important to evaluate BW objects rather than individual tables.

The script in this blog combines the different dictionary views to get a complete view of all tables and then applies some further logic to put the tables in their BW context. Together, this results in a complete breakdown of database space size usage by BW object.

 

Step 1: The dictionary views

The four views below are used to get an overview of memory usage, disk allocation and other storage parameters for each table in HANA. These are all fully documented on help.sap.com

(Technology Platform > SAP HANA Platform > SAP HANA Platform Core > SAP HANA SQL and System views reference > System Views Reference)



 

Table Name                       Description
TABLES                           All available tables
M_CS_TABLES                      Runtime properties of column store tables
M_RS_TABLES                      Runtime properties of row store tables
M_TABLE_PERSISTENCE_STATISTICS   Persistence (file) storage statistics for all tables



Note: I am pretty sure that I don't actually need to use 'TABLES' in my script; the reason it is still there is just that I don't want to break my code (if it ain't broke, don't fix it). Please feel free to share your cleaner version of the SQL in the comments section of this blog.


The information I use from these views is just the basic storage parameters: memory size in total (main, delta and history), memory size in delta, disk size and record count.

The SQL for this is quite straightforward. The only thing to bear in mind is that tables can be partitioned, so you have to use a 'group by' and aggregate the statistics.
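As a minimal sketch of this aggregation step (assuming your BW schema is SAPNW1, as in the full script below), the per-partition rows of M_CS_TABLES can be rolled up per table like this:

-- Minimal sketch: per-table memory and disk footprint, aggregated over
-- partitions (M_CS_TABLES returns one row per table partition).
SELECT C.TABLE_NAME,
       SUM(ROUND(C.MEMORY_SIZE_IN_TOTAL/1024/1024,0)) AS TOTAL_SIZE_IN_MB,
       SUM(ROUND(C.MEMORY_SIZE_IN_DELTA/1024/1024,0)) AS DELTA_SIZE_IN_MB,
       SUM(ROUND(P.DISK_SIZE/1024/1024,0)) AS DISK_SIZE_IN_MB,
       SUM(C.RECORD_COUNT) AS RECORD_COUNT
FROM M_CS_TABLES C
JOIN M_TABLE_PERSISTENCE_STATISTICS P
  ON C.SCHEMA_NAME = P.SCHEMA_NAME AND C.TABLE_NAME = P.TABLE_NAME
WHERE C.SCHEMA_NAME = 'SAPNW1'
GROUP BY C.TABLE_NAME
ORDER BY SUM(C.MEMORY_SIZE_IN_TOTAL) DESC;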

 

Step 2: The ‘BW’ groupings

Somewhere deep down in BW there might be a table (or a number of tables) where the relationship between a table name and an object type (Cube, DSO, InfoObject, etcetera) can be found. If such a table exists, I have not been able to find it. Instead, I use the well-documented naming convention BW uses for its tables.

The code I have posted here is complete for the systems where I have used it. It has /BIC/A* for DSOs, /BIC/F* for cubes and a lot more. If you happen to use a system with different components (for example BPC) then you might have to add some elements.

Unfortunately, PSA and Change Log tables are defined with the same prefix: /BIC/B*. To distinguish between the two, I look for specific strings in the description. This makes it a bit more complex, the coding may vary if there is bespoke development in different languages, and ultimately there is a risk that a table is not correctly classified. It then ends up in the list as a PSA table instead of a Change Log table, or the other way around, which I believe is just a minor inconvenience.

 

Structure of the code

I have tried to keep the ad-hoc SQL script simple, and as a result it is a bit longer than strictly necessary. There is a bit of complexity in interrogating the table names to derive the object type, but apart from that the code is quite easy to read. The price you pay for this is that the code comes in three parts, linked together through a 'union'. The code is cut up as follows:

 

Union part 1: BW Objects – assumed to always be column store tables

Union part 2: Non-BW Objects, column store tables

Union part 3: Non-BW Objects, row store tables

 

There is duplication of code, but the coding is a lot simpler to understand compared to a solution (or at least my solution) where everything is brought together in one statement.

 

Step 3: Run the code

I promised you a solution which would not require any development, but there is one small tweak you will need to make to the code in this post, unless your BW system happens to sit in schema SAPNW1. Just do a "find and replace" of "SAPNW1" with the schema name of your BWoH system and you can run the code.

There is one more caveat though. For the ad-hoc solution I have to join the HANA dictionary tables with a BW text table. The text in this table is language dependent. If you're lucky, your system has all table descriptions available in a single language. If not, you're in trouble: you can only select a single language if you are sure you are not excluding a relevant table whose description is not maintained in that language. I usually download the result set to Excel and create a pivot table on language and table count to see if I can select a specific language.
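The same check can also be done directly in SQL instead of pivoting in Excel. A small sketch, again assuming schema SAPNW1:

-- Count table descriptions per language to see whether a single
-- language covers (almost) all tables.
SELECT DDLANGUAGE, COUNT(*) AS TABLE_COUNT
FROM SAPNW1.DD02T
GROUP BY DDLANGUAGE
ORDER BY TABLE_COUNT DESC;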

A better solution might be to use outer joins. I haven’t tried this yet but I might give that a try if I can find the time to do so.
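For what it is worth, here is an untested sketch of that outer-join idea: by fixing the language in the join condition itself, tables without a description in that language are still kept in the result (with an empty description) instead of being dropped.

-- Untested sketch: keep tables even if no English description exists.
SELECT T.TABLE_NAME, D.DDTEXT
FROM TABLES T
LEFT OUTER JOIN SAPNW1.DD02T D
  ON T.TABLE_NAME = D.TABNAME AND D.DDLANGUAGE = 'E'
WHERE T.SCHEMA_NAME = 'SAPNW1';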

 

As I mentioned before, I have also created a monitoring application where you can keep track of table growth over time. In this solution I split master data from transactional data so I don’t have the problem around duplicate lines and missing lines in my transaction data (as a result of joining with a text table). I hope to post the details of how to build this monitoring application shortly in a follow-up blog.

Until then, I hope you get some real insight into the database usage of your BWoH system using this ad-hoc solution. Remember: HANA is a premium product, so use it efficiently.

 

 

Finally: The SQL Statement

 

(SELECT TOP 100
-- 1. BW objects, always column store tables
       MAX(CASE SUBSTRING(T.TABLE_NAME,6,1)
             WHEN 'A' THEN 'DSO'
             WHEN 'B' THEN (CASE SUBSTRING(DDTEXT,1,3) WHEN 'PSA' THEN 'PSA' ELSE 'C-LOG' END)
             WHEN 'D' THEN 'CUBE'
             WHEN 'E' THEN 'CUBE'
             WHEN 'F' THEN 'CUBE'
             WHEN 'P' THEN 'IOBJ'
             WHEN 'Q' THEN 'IOBJ'
             WHEN 'S' THEN 'IOBJ'
             WHEN 'T' THEN 'IOBJ'
             WHEN 'X' THEN 'IOBJ'
             ELSE 'OTHER'
           END) AS BW_TYPE,
       MAX(CASE SUBSTRING(T.TABLE_NAME,6,1)
             WHEN 'A' THEN SUBSTRING(T.TABLE_NAME,7,LENGTH(T.TABLE_NAME)-8)
             WHEN 'B' THEN (CASE SUBSTRING(DDTEXT,1,3)
                              WHEN 'PSA' THEN REPLACE(SUBSTR_AFTER(DDTEXT,'PSA for '),' Segment 0001','')
                              ELSE (CASE SUBSTRING(DDTEXT,1,8)
                                      WHEN 'Transfer' THEN SUBSTR_AFTER(DDTEXT,'Application ')
                                      ELSE SUBSTR_AFTER(DDTEXT,'Object ')
                                    END)
                            END)
             WHEN 'D' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'E' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'F' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'P' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'Q' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'S' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'T' THEN SUBSTRING(T.TABLE_NAME,7,9)
             WHEN 'X' THEN SUBSTRING(T.TABLE_NAME,7,9)
             ELSE 'OTHER'
           END) AS BW_OBJECT,
       T.TABLE_NAME,
       D.DDTEXT AS DESCRIPTION,
       SUM(ROUND(C.MEMORY_SIZE_IN_TOTAL/1024/1024,0)) AS TOTAL_SIZE_IN_MB,
       SUM(ROUND(C.MEMORY_SIZE_IN_DELTA/1024/1024,0)) AS DELTA_SIZE_IN_MB,
       SUM(ROUND(P.DISK_SIZE/1024/1024,0)) AS DISK_SIZE_IN_MB,
       T.TABLE_TYPE,
       SUM(RECORD_COUNT) AS RECORD_COUNT,
       D.DDLANGUAGE
FROM TABLES T
JOIN M_CS_TABLES C ON T.TABLE_NAME = C.TABLE_NAME
JOIN M_TABLE_PERSISTENCE_STATISTICS P ON T.TABLE_NAME = P.TABLE_NAME
JOIN SAPNW1.DD02T D ON T.TABLE_NAME = D.TABNAME
WHERE T.SCHEMA_NAME = 'SAPNW1'
  AND (   T.TABLE_NAME LIKE '/BI%/D%'
       OR T.TABLE_NAME LIKE '/BI%/E%'
       OR T.TABLE_NAME LIKE '/BI%/F%'
       OR T.TABLE_NAME LIKE '/BI%/B%'
       OR T.TABLE_NAME LIKE '/BI%/A%'
       OR T.TABLE_NAME LIKE '/BI%/P%'
       OR T.TABLE_NAME LIKE '/BI%/Q%'
       OR T.TABLE_NAME LIKE '/BI%/S%'
       OR T.TABLE_NAME LIKE '/BI%/T%'
       OR T.TABLE_NAME LIKE '/BI%/X%'
       OR T.TABLE_NAME LIKE '/BI%/1%'
       OR T.TABLE_NAME LIKE '/BI%/2%'
       OR T.TABLE_NAME LIKE '/BI%/3%' )
GROUP BY T.TABLE_NAME,
         D.DDTEXT,
         T.TABLE_TYPE,
         D.DDLANGUAGE
ORDER BY SUM(MEMORY_SIZE_IN_TOTAL) DESC)

UNION ALL

(SELECT TOP 100
-- 2. Basis tables, column store tables
       NULL,
       NULL,
       T.TABLE_NAME,
       D.DDTEXT,
       SUM(ROUND(C.MEMORY_SIZE_IN_TOTAL/1024/1024,0)),
       SUM(ROUND(C.MEMORY_SIZE_IN_DELTA/1024/1024,0)),
       SUM(ROUND(P.DISK_SIZE/1024/1024,0)),
       T.TABLE_TYPE,
       SUM(RECORD_COUNT),
       D.DDLANGUAGE
FROM TABLES T
JOIN M_CS_TABLES C ON T.TABLE_NAME = C.TABLE_NAME
JOIN M_TABLE_PERSISTENCE_STATISTICS P ON T.TABLE_NAME = P.TABLE_NAME
JOIN SAPNW1.DD02T D ON T.TABLE_NAME = D.TABNAME
WHERE T.SCHEMA_NAME = 'SAPNW1'
  AND (    T.TABLE_NAME NOT LIKE '/BI%/D%'
       AND T.TABLE_NAME NOT LIKE '/BI%/E%'
       AND T.TABLE_NAME NOT LIKE '/BI%/F%'
       AND T.TABLE_NAME NOT LIKE '/BI%/B%'
       AND T.TABLE_NAME NOT LIKE '/BI%/A%'
       AND T.TABLE_NAME NOT LIKE '/BI%/P%'
       AND T.TABLE_NAME NOT LIKE '/BI%/Q%'
       AND T.TABLE_NAME NOT LIKE '/BI%/S%'
       AND T.TABLE_NAME NOT LIKE '/BI%/T%'
       AND T.TABLE_NAME NOT LIKE '/BI%/X%'
       AND T.TABLE_NAME NOT LIKE '/BI%/1%'
       AND T.TABLE_NAME NOT LIKE '/BI%/2%'
       AND T.TABLE_NAME NOT LIKE '/BI%/3%')
GROUP BY T.TABLE_NAME,
         D.DDTEXT,
         T.TABLE_TYPE,
         D.DDLANGUAGE
ORDER BY SUM(MEMORY_SIZE_IN_TOTAL) DESC)

UNION ALL

(SELECT TOP 100
-- 3. Basis tables, row store tables
       NULL,
       NULL,
       T.TABLE_NAME,
       D.DDTEXT,
       SUM(ROUND((R.USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024,0)),
       NULL,
       SUM(ROUND(P.DISK_SIZE/1024/1024,0)),
       T.TABLE_TYPE,
       SUM(RECORD_COUNT),
       D.DDLANGUAGE
FROM TABLES T
JOIN M_RS_TABLES R ON T.TABLE_NAME = R.TABLE_NAME
JOIN M_TABLE_PERSISTENCE_STATISTICS P ON T.TABLE_NAME = P.TABLE_NAME
JOIN SAPNW1.DD02T D ON T.TABLE_NAME = D.TABNAME
WHERE T.SCHEMA_NAME = 'SAPNW1'
  AND (    T.TABLE_NAME NOT LIKE '/BI%/D%'
       AND T.TABLE_NAME NOT LIKE '/BI%/E%'
       AND T.TABLE_NAME NOT LIKE '/BI%/F%'
       AND T.TABLE_NAME NOT LIKE '/BI%/B%'
       AND T.TABLE_NAME NOT LIKE '/BI%/A%'
       AND T.TABLE_NAME NOT LIKE '/BI%/P%'
       AND T.TABLE_NAME NOT LIKE '/BI%/Q%'
       AND T.TABLE_NAME NOT LIKE '/BI%/S%'
       AND T.TABLE_NAME NOT LIKE '/BI%/T%'
       AND T.TABLE_NAME NOT LIKE '/BI%/X%'
       AND T.TABLE_NAME NOT LIKE '/BI%/1%'
       AND T.TABLE_NAME NOT LIKE '/BI%/2%'
       AND T.TABLE_NAME NOT LIKE '/BI%/3%')
GROUP BY T.TABLE_NAME,
         D.DDTEXT,
         T.TABLE_TYPE,
         D.DDLANGUAGE
ORDER BY SUM(ROUND((R.USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024,0)) DESC)

 




Viewing an info area in the Project Explorer tree / Finding an info area in BW Modeling Tools


Step (1) Press "Ctrl + H" or select the "Search" icon in the toolbar (see picture below) to invoke the "Search" dialog

 

[Image: zza.png]

 

 

Step (2) Select the first tab “BW Object Search” (if this is not selected by default)

 

 

Step (3) In the “Search String”, enter the name of the info area that you want to see

 

 

Step (4) Select “AREA[InfoArea]” from the object type as seen below

(If needed we can set the “Created” / “Last change” to further refine the search)

 

 

[Image: zzb.png]

 

 

Step (5) Hit the “Search” button in the “Search” dialog

 

 

Step (6) In the "Search" view, the list of info areas matching our search string "AK_TEST" will be displayed

 

 

[Image: zzc.png]

 

 

 

Step (7) We can open this info area directly by double-clicking on it, and the info area opens in the editor (in the info area editor, the "Technical Name" and "Description" are shown)

 

 

Step (8) Further, from the Project Explorer view, we can select "Link with Editor" to have the info area shown in the BW project explorer tree (as seen below)

 

[Image: zzd.png]

SAP HANA – SQL Execution Performance


BW can effortlessly expose data for external consumption e.g. by a third party tool, using the checkbox “External SAP HANA View”. Those automatically generated Calculation Views which are based on Composite Providers or even BW Queries are usually quite complex, so SQL access will probably perform poorly. This blog is about two parameters which can greatly enhance runtime performance for certain scenarios, tested with BW 7.4 SP10 on HANA Revisions 85.02, 97.02 and 102.02.

 

1. Parameter “RS2HANA_FORCE_SQL”

Up until HANA SPS8, SQL statements were executed using the Column Engine by default. Starting with HANA SPS9 this behavior changed, now the SQL Engine is used whenever possible. It involves a lot more optimization rules e.g. join reordering determination, so in theory this might be beneficial. However, it’s only efficient if every graphical model node can be translated into SQL. What’s more, the SQL engine is still immature and currently has some bugs, which in some cases results in very bad runtimes. As a workaround, the SQL engine can be disabled.

 

How to set locally:

Open a View in HANA Studio, change “Execute In” to “ “, save and redeploy. The setting will apply for that single View only.

 

How to set globally:

Add object RS2HANA_FORCE_SQL with value “ ” into the table RS2HANA_VIEW_SET.

 

The table is buffered, so direct change with DB tools will only be effective after a buffer reset on the Application Servers. For that reason, it is recommended to customize this table through the appropriate SET method, see SAP Note 2252122.

 

For the setting to take effect, it is required to reactivate all the Views by using the button “repair external SAP HANA View” in transaction RS2HANA_ADMIN.

 

Note: it does make a difference whether a View is activated from within ABAP or within HANA Studio. ABAP directly inserts View information into the Generation XML. It is possible that there are some features implemented which HANA Studio doesn’t support yet, so potentially functionality could get lost. Therefore, it is advisable not to interfere with HANA Studio when the parameter RS2HANA_FORCE_SQL is set.

 

How to check:

Via HANA Studio -> Catalog -> “Find Table” the Create Statement of a View can easily be reviewed. If the SQL Engine is enabled, there will be an additional “WITH PARAMETERS (FLAGS=’1024’)” at the end of the statement.

 

If not present, the Column Engine is active instead.
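The same check can also be scripted against the catalog instead of the UI; a small sketch (the schema and view name pattern are assumptions, since the generated external views typically land in _SYS_BIC):

-- Look up the create statement of a generated view in the catalog.
SELECT DEFINITION
FROM VIEWS
WHERE SCHEMA_NAME = '_SYS_BIC'
  AND VIEW_NAME LIKE '%TESTQUERY01%';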

 

Side effects:

For HANA revisions up to SPS8 the NULL handling in expressions differs between the two engines, so if the engine is switched, there might be unexpected results. Example: in the column engine the expression 2 + null results in 2, whereas in the SQL engine it results in null, see SAP Note 1857202. Since HANA SPS9, the NULL handling is identical for both engines, so there won't be side effects here.
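The difference is easy to reproduce: in plain SQL (and hence in the SQL engine) the following statement returns NULL, whereas the column engine on older revisions would have returned 2.

-- Standard SQL NULL propagation: 2 + NULL yields NULL.
SELECT 2 + NULL AS RESULT FROM DUMMY;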

 

2. Parameter “EXTRA_VIEW_ATTRIBUTES_MODE”

Some data models might require another abstraction layer, so there are probably SQL Views built on top of Calculation Views, see screenshot below.

 

In this example, there is a SQL View (“TESTVIEW”) with 10 fields which are derived from an underlying BW Query View (“TESTQUERY01”). What happens, if a SQL statement is executed, which selects just 3 fields from the SQL View?

 

Common sense expectation would be that only the 3 requested fields are passed down and read from the original Calculation View, which indeed is true for HANA revisions up to SPS8. However, starting with HANA SPS9 redundant attributes in the projection list of the SQL view are no longer reduced automatically. So for the scenario above, all 10 fields would be internally materialized at first, only to sort out the invisible ones afterwards. While the ultimate output is identical, this calculation logic is inefficient and runtimes are potentially many times higher.

 

Fortunately, there is a parameter which tells the optimizer to behave like in earlier revisions, that is to ignore possible differences in the result and never materialize invisible fields. While that is not standard compliant anymore, it is safe to use with BW and should drastically increase performance.

 

How to set locally:

In HANA Studio SQL console apply the following additional parameter at the end of each select statement:

WITH PARAMETERS (PLACEHOLDER = ('$$CE_INTERNAL_EXTRA_VIEW_ATTRIBUTES_MODE$$','2'));
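For example, a complete statement against the SQL view from the scenario above would look like this (the view name is taken from this blog, the field names are just placeholders):

SELECT "FIELD1", "FIELD2", "FIELD3"
FROM "TESTVIEW"
WITH PARAMETERS (PLACEHOLDER = ('$$CE_INTERNAL_EXTRA_VIEW_ATTRIBUTES_MODE$$','2'));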

 

How to set globally:

In HANA Studio open the SQL console and execute the statement below once:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'System' ) SET ('calcengine', 'extra_view_attributes_mode') = '2' WITH RECONFIGURE;

 

To unset the global parameter just execute:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'System' ) UNSET ('calcengine', 'extra_view_attributes_mode') WITH RECONFIGURE;

BW Obsolete Object Types and Substitutes (SAP BW 7.4)


While working on BW 7.4, I have come to know that many BW object types are not available anymore. There is a substitute for each obsolete object type, as listed below.

 

 

Obsolete Object                          Substitute in BW 7.4
InfoSpoke                                Open Hub Destination
Update Rules                             Transformation
Transfer Rules                           Transformation
3.x DataSource                           7.x DataSource
3.x data archiving                       7.x data archiving
CompositeProvider in Workbench (RSA1)    CompositeProvider in HANA Studio
VirtualProvider on 3.x InfoSource        CompositeProvider

 

It is recommended to use the new objects as they provide more functions and better usability. By default, all these obsolete functions are hidden. That means most 3.x functions in the context menus of the object trees in RSA1 and in other transactions are not available.

 

However, you can change this setting using transaction code RSCUSTV32.

 

[Image: obsolete.png]

A Speedy Look into HANA2016 (Feb 16-19)


SAP Insider HANA 2016 is right around the corner. This will be my first time at this exciting event and I am looking forward to it. The conference is co-located with the BI 2016, IoT 2016, and Admin and Infrastructure 2016 events, which means that you have access to a vast range of content and experts, all in one conference pass. If your organization is running SAP HANA, or you are considering it, I am sure you will find a lot of value in attending this event.

[Image: HANA2016.gif]

HANA2016 combines a variety of session formats for education, like panel discussions, presentation sessions and pre-conference workshops for gaining a full spectrum of information on SAP HANA – Migrations, BW on HANA, HANA Cloud, UX, Data Management and Analytics, to name a few.

 

SAP Mentor and ASUG BI lead Tammy Powlas has a great line up of customer sessions already marked on her agenda. See here –

 

http://scn.sap.com/community/business-intelligence/blog/2016/01/16/bi-2016-conference-helps-navigate-the-future-and-win-at-the-wynn

 

I also want to share some information on the session that I will be presenting:


Tools, tips, and strategies to optimize BEx query performance for SAP HANA
Friday, February 19, 2016
8:30 AM - 9:45 AM

 

Join me to get practical tips on how to optimize the performance of your existing BEx queries and guidance to design SAP HANA optimized queries. I will also share how to evaluate query performance and tune BEx queries to run better on SAP HANA. Get insights on the new query design tool available in SAP BW 7.4, and get best practices for designing SAP HANA optimized queries. I will also cover the SAP HANA analytic process and describe how to leverage the analytic manager to enable advanced analytic capabilities.

 

Along with great educational content, networking is also a crucial aspect of these events. Making new connections brings a lot of value when you can share experiences and your point of view. Join me in the “Ask the Experts” session where speakers will be available to talk 1x1 about your specific issues or questions. I also look forward to mingling with the attendees during the evening reception and speed networking sessions.

 

All in all, hoping for a very productive event. See you there!

Expose CDS Views to the SAP BW BEx Query Designer


Hello All,

 

This is the second part of my attempt to bring to you some of the additions that have happened in the BW world post the launch of S/4HANA.


You can find Part 1 Here --> Solution for 0ANALYSIS Issue while trying to access ‘Open in Design Studio’ option of S/4HANA CDS Views



By now, you might be aware that in S/4 HANA we have a very important concept called CDS.

I don’t want to explain the complete history/details of CDS views here, as it’s been explained in detail elsewhere.


In this blog, I would like to introduce an approach by which you can consume CDS Views in the BW BEx Query Designer.


The following figure shows the Fiori Launchpad of an S/4HANA system.

[Image: 1.png]

 

Once you open the Query Browser, you will find the available CDS Views.

Here we are talking exclusively about the View C_DAYSSALESOUTSTANDING.

 

[Image: 2.png]

 

[Image: 3.png]

 

SAP Says:

All S/4HANA CDS views are automatically exposed as ODP transient InfoProviders and can be used in the BEx Query Designer to define custom queries.

(Inherited from SAP Standard Document)



Use Cases:

To leverage BEx capabilities; valid for the embedded BW setup

 

Capabilities:

All S/4HANA Analytics Open CDS views are automatically exposed as InfoProviders in the BEx Query Designer

Supports BEx variables

Supports restricted key figures

Supports exceptions and conditions

Supports currency conversion

Supports BW Report-to-Report interface

Supports S/4HANA Analytics privileges

 

Not supported (Query not based on BW InfoObjects):

BW Hierarchies, node variables

BW analysis authorizations

 

PS: Please note that the above points were derived from the SAP document on SAP S/4HANA Analytics & SAP BW Data Integration.

 

Now, we will log into the BW Query Designer of the embedded S/4HANA system and identify the required system.

 

[Image: 4.png]

 

[Image: 5.png]

 

Provide the Credentials:

[Image: 6.png]

 

Open a New one:

[Image: 7.png]

 

[Image: 8.png]

For ease of identification: search with 2C*, as transient providers start with that naming convention.

 

[Image: 9.png]

 

Open the required one; in our case it is the Days Sales Outstanding Smart Business App.

Now play around with the query and design a custom one according to your requirements.

[Image: 10.png]

 

Hope this gives you an idea of how we can expose a CDS View in our Query Designer.



BR

Prabhith

Mapping of InfoProvider fields in CompositeProvider


When creating CompositeProviders in the new BW modeling tools in Eclipse it may become difficult to get a good overview of which InfoObjects from which InfoProviders are mapped in the Union or Join part, especially when your CompositeProvider contains many objects.

 

For MultiProviders we can use table RSDICMULTIIOBJ to get an overview of the mapping between source and target fields. However, since the definition of a CompositeProvider is stored as XML, such a table does not exist for CompositeProviders. Although the XML of a CompositeProvider can be displayed in RSA1, it's difficult to read, even when you export the XML definition to Excel.

 

By making use of the attached program, it's possible to display the mapping between InfoProvider fields and the CompositeProvider on your screen.

 

Example CompositeProvider

For this example I created a simple CompositeProvider containing two Advanced DataStore Objects:

 


[Image: WCP__001.png]

 

The CompositeProvider contains the InfoObjects 0DOC_NUMBER and 0CALDAY from ADSO 01 as well as the InfoObjects 0DOC_NUMBER, 0DOC_ITEM, 0AMOUNT and 0CURRENCY from ADSO 02. The InfoProviders are joined on 0DOC_NUMBER.

 

The XML definition that is created when activating the CompositeProvider is stored in table RSOHCPR, field XML_UI:

 

[Image: rsohcpr.png]
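If you just want to see the raw definition, it can also be selected directly from that table; a small sketch (XML_UI is named above, while the name column HCPRNM and the filter OBJVERS = 'A' for the active version are assumptions based on the usual RS* table conventions):

-- Read the stored XML definition of the active CompositeProviders.
SELECT HCPRNM, XML_UI
FROM RSOHCPR
WHERE OBJVERS = 'A';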

 

This XML string is displayed in the 'hierarchical view' when you use the 'Display XML' button in RSA1 (partially shown below):

 

[Image: XML.png]

In this definition the <entity> part contains the name of the involved InfoProviders, whereas the <mapping xsi:> part contains the mapping between the source and target fields.

 

In order to 'break down' this hierarchical view, we need to put the XML definition into an internal table. One way to do this is to make use of the function module SMUM_XML_PARSE; a guide with more information on this function can be found here.

 

This function parses the XML definition into an internal table containing 4 columns:

HIER - the hierarchy 'node'

TYPE - A for 'header' and 'V' for value nodes

CNAME - the name of a node

CVALUE - the value of a node

 

[Image: lt_1.png]

 

To display the InfoProvider name and the source and target fields on our screen, we need to collect these values from the internal table into an output table. As seen above, the <entity> part contains the name of the InfoProvider. So we check the column CNAME for the value 'entity' and get the technical name in column CVALUE (minus the suffix '.composite').

 

After that we get the values of the source and target fields by checking CNAME for 'sourceName' and 'targetName':

 

[Image: lt_2.png]

 

When all values have been checked, they are displayed by making use of the SAP class cl_salv_table:

 

[Image: screen.png]

 

[Image: output.png]

Note: the XML definition also contains information about the type of join and the properties of the CompositeProvider, which could also be added to the program output in order to create a 'documentation program'.

#BWonHANA's New Query Designer


Initial, lightweight versions were shipped with BW-on-HANA 7.4, but nowadays, and especially with BW 7.5, the new BW Query Designer (BW QD) has reached full maturity. This blog summarises the most important facts that you should be aware of. All BW-on-HANA customers are recommended to consider the BW QD as the default choice for query design.

  • It is based on HANA Studio and therefore Eclipse,
  • it is a member of BW's modeling tools (BW-MT),
  • it is built on the same paradigms as the (native) HANA modeling tools to allow for a seamless and harmonised transition between the two worlds.

Due to the strong relationship with HANA Studio, the BW QD is only available with BW-on-HANA.

 

Installation

The BW QD is part of the BW Modeling Tools in Eclipse (BW-MT) with the latter being tied to HANA Studio. On this SCN page, you find almost everything that you require to get started. In particular, there are two videos (part I and part II) that describe how to install the BW-MT from the SAP Service Market Place. In those videos, an older version of the BW-MT and Eclipse are used. Simply adjust accordingly and use the most recent versions.

 

The Relationship with the BEX Query Designer

With BW 7.4 SP13 and BW 7.5, the new BW QD not only holds the functionality offered by the old BEX QD but goes beyond:

 


Figure 1: Data preview in BW's new Query Designer.


Figure 2: How to expose a BW query as a SQL accessible HANA view.


Figure 3: Creating a BW variable of processing type SAP HANA Exit.

 

The old BEX QD still works and is the tool of choice for non-HANA based BW installations. For BW-on-HANA installations, however, it is highly recommended to use the new BW QD due to the many integration options with standard (native) HANA like SQL access or modeling.

 

Getting Started

This SCN page holds a bunch of useful information on the BW-MT and the new BW QD. I highly recommend going through the 2 short installation videos (4 min + 7 min) mentioned above. Afterwards, have a look at the video showing an example of defining a BW query via the new BW QD (8 min).

 

This blog is also available here. You can follow me on Twitter via @tfxz.


Data modeling approach in HANA Studio using SLT


Current Situation:

Extracting data from base ECC tables to HANA (BW on HANA) requires data modeling effort in ECC (R/3), in BW and in HANA Studio. In the first step, in the R/3 system, we need to create DataSources (generic DataSources) based on the ECC tables. In the second step, in BW, we need to create the BW DataSources and the data flow, and we also need to load the data into BW daily, weekly, etc. Finally, we need to create HANA views in HANA Studio to consume the ECC table data. All these steps require a lot of BW data modeling and data loading effort; sometimes complex ABAP code may also need to be written.

 

Improvement idea: use SLT with SAP HANA Studio specifically for this scenario. Without BW, we can extract data from ECC tables to HANA; no data modeling effort is required in R/3 (the ECC system) either. All ECC tables are accessed directly from HANA Studio. There are two ways of extraction:

 

Case 1) All the data from the tables simply needs to be extracted as-is from R/3. Case 2) Generic data sources would have to be created on different tables; customized, complex business logic may need to be applied while extracting from multiple tables.

 

How to implement:

For Case 1:

1) The ECC tables are replicated directly, along with all their data. There is a feature in SLT which is triggered automatically in HANA and updates the tables in case of data changes in ECC; data is replicated to HANA instantly. Once these tables are available in HANA, we can consume them easily with HANA views.

 

For Case 2: when multiple tables need to be combined with complex business logic, we write a stored procedure in HANA and then use the table data directly, without depending on BW or ABAP.
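As a minimal SQLScript sketch of that idea (the schema name SLT_SCHEMA, the procedure name and the business logic are purely illustrative; VBAK/VBAP are the standard ECC sales order tables):

-- Hypothetical example: order values aggregated per sales document,
-- built directly on tables replicated from ECC via SLT.
CREATE PROCEDURE "SLT_SCHEMA"."P_ORDER_VALUES" (
  OUT RESULT TABLE (VBELN NVARCHAR(10), NETWR DECIMAL(15,2)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  RESULT = SELECT K.VBELN, SUM(P.NETWR) AS NETWR
           FROM "SLT_SCHEMA"."VBAK" K
           JOIN "SLT_SCHEMA"."VBAP" P ON K.VBELN = P.VBELN
           GROUP BY K.VBELN;
END;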

 

Benefits: 1) Real-time reporting is much easier and much faster without BW. Data modeling effort in ECC and BW is fully bypassed; everything can be controlled from SLT and HANA Studio, which is much easier because the programming and configuration steps are very user friendly in HANA and SLT (only SQL knowledge is required).

 

 

2) Earlier, BW-on-HANA projects used to depend 100% on BW for extracting ECC table data, but with SLT in SAP HANA there is, for this particular scenario, no dependency on BW at all.

 

3) Complex ABAP routines are not required anymore in R/3 or BW; rather, SQL stored procedure (SQLScript) knowledge is required. SQL is more relevant here than ABAP.

 

4) BW data loads are also obsolete; data loading is no longer required in BW. Data replication is done from ECC to HANA, triggered whenever records change in the ECC tables.

 

5) Resource utilization is better, as we do not need BW resources here.


Earlier, generic extraction from R/3 to BI used to be done using a database view, SAP Query or a function module.

In this document we will see how in HANA we can easily extract data from R/3 tables without creating generic DataSources in ECC and BW.

 

Let's see how we used to extract data in BW:

 

In R/3 -  tcode: SE11


[Image: step1.jpg]

Only transparent tables or database views can be used to create generic extractors in ‘View/Table’

 

Select the tab ‘View/Fields’ and click ‘Table Fields’ button to select the fields of the table which we need.

 

 

[Image: step3.jpg]

 

 

Select all the required fields and click on the 'Copy' button; the fields will then be copied to the view. Click on the 'Activate' button to activate it.

 

 

2) Then create data source in ECC- Tcode : RSO2

 

[Image: step3.jpg]


3) Check the newly created  DS in ECC- Tcode  RSA3

 

4) The DataSource will be available for BW replication. Tcode – RSA6

 

In BW :

 

1) Replicate the newly created DataSource from ECC:

[Image: step4.jpg]

Select the application component: select 'Materials Management' and check whether the DataSource is available under it. It won't be there, because it was created in R/3. So right-click on 'Materials Management' and click on 'Replicate Metadata'. Then all the DataSources under MM will be regenerated in BI.

 

 

Using SLT in HANA Studio:

 

With SLT we can easily load ECC data (from R/3 tables) into HANA and then create HANA views for reporting.

 

The SAP Landscape Transformation (LT) Replication Server is the SAP technology that allows us to load and replicate data in real time from SAP and non-SAP source systems into an SAP HANA environment. Multiple source systems can be connected to one SAP HANA system.

 

 

Data provisioning through SLT requires an RFC/DB connection to the SAP/non-SAP source system and a DB connection to the SAP HANA database. On the SAP SLT server we define the mapping and transformation. Below is a roadmap for data provisioning through SLT.

 

[Image: step11.jpg]

 

Technique for replicating data in SLT :

 

 

Detailed steps: go to the Data Provisioning option in HANA Studio.

 

[Image: step5.jpg]

[Image: step6.jpg]

 

 

 

 

 

 

 

An initial load is performed first, which loads the data from the source to the target system; then a replication phase begins that extracts only the changes that happen in the source database and replicates them to the target database, thereby facilitating data replication in real time. This is fully trigger based.

 

Besides replicating data in real time, we can also write filter conditions and simple transformation logic.

 

Different Replication Steps :

 

The different replication steps are Load, Replicate, Stop, Suspend and Resume.

 

LOAD: Starts an initial load of replication data from the source system. The procedure is a one-time event. After it is completed, further changes to the source system database will not be replicated.

 

[Image: step7.jpg]

 

Replicate: Combines an initial load procedure and the subsequent replication procedure (real time or scheduled).
Before the initial load procedure starts, a database trigger and related logging table are created for each table in the source system as well as in the SLT replication server.

 

[Image: step8.jpg]

Stop: Stops any current load or replication process of a table.
The stop function removes the database trigger and related logging tables completely. Only use this function if you do not want to continue replicating a selected table; otherwise you must load the table initially again to ensure data consistency.

 

 

 

 

After the tables are scheduled initially, they are loaded with the status 'Executed'.

 

The tables which are being replicated get the status 'In Process'.

 

The status of the tables changes as follows:

Scheduled -> Executed -> In Process


Now the ECC tables will appear in HANA Studio. Please note that the schema where you want to store the tables needs to be configured in the SLT server.

 

In the SLT system we need to give the schema name and provide the HANA database connection parameters. The transaction code is LTR.

[Image: step10.jpg]



We can create HANA/SQL views directly on top of these tables that are replicated from ECC.

[Image: step9.jpg]

Data Loading to BW From ECC using ODP - SAP HANA Information Views


ECC on HANA is the source system and BW on HANA is the target system.


An Analytic View (ms/ANV) exists in the source ECC on HANA system.

The requirement is to load the output data of this Analytic View into a DSO of the BW system.

[Image: temp.PNG]

As per the classical approach, the data would be loaded into the BW PSA via DB Connect / data provisioning options, and the DSO would then be loaded from the PSA.

If the PSA data is not deleted on a regular basis, it increases the cost of data storage, the downtime for maintenance tasks and the runtime of the data loads.


I have implemented this requirement using the ODP - SAP HANA Information Views approach.

In this ODP approach, data is stored directly in the target DSO from the ECC HANA Information Views using a DTP.

This makes PSA storage and InfoPackage execution obsolete.


The first step in the data load is to create the ODP - SAP HANA Information Views connection (RFC ABAP connection) in the BW system.

In the connection parameters, the host name of the target ECC system has to be specified.

Once the connection tests are successful, the new RFC connection is created under the ODP - SAP HANA Information Views folder.


[Image: temp.PNG]


The ODP - SAP HANA Information Views connection (RFC ABAP connection) between the source and target system is:

[Image: temp.PNG]

A DataSource is created in the BW system. In the pop-up, the ODP context is displayed as "HANA Information Views".

In the "Operational Data Provider" text box, the HANA Information View (Analytic View / Calculation View) name has to be entered.


[Image: temp.PNG]


The help pop-up shows the list of Information Views available in the source ECC HANA system.

From the list, the appropriate Information View has to be selected.

Once the DataSource creation is done, there is no need to create an InfoPackage.

[Image: temp.PNG]


The BW target can be either a classic DSO or an advanced DSO.

The target DSO/ADSO is created and the transformation mapping is done from the ODP DataSource to the DSO.

 

In the DTP, with ODP - SAP HANA Information Views as the source, the delta extraction mode is not supported, so the extraction mode is changed to Full.

Under the parameters of the DataSource, data extraction is selected as "Directly from Source system. PSA not used (for small amount of data)".


[Image: temp.PNG]


Ignore the warning pop-up "Last change is incompatible with SAP HANA Processing".

(The DTP's SAP HANA execution processing mode will be disabled.)

[Image: temp.PNG]


The DTP adapter is changed to "Extraction from SAP Source system by Operational Data Provisioning".

[Image: temp.PNG]


Activate the DTP and execute it. After the DTP execution has completed, the DSO data has to be activated.

The DSO output is:

 

[Image: temp.PNG]

 

Data is loaded successfully into the BW DSO from the source SAP HANA Information Views, without PSA storage and without InfoPackage execution.

 

Your thoughts and comments are welcome.

 

Regards,

Muthuram




How Nucor Simplified Their BW Landscape By Using An Embedded BW


This blog is based on a story that Nucor presented to us. Thankfully, they have allowed us to convert their slides into this blog, as we believe it is a valuable example for many other customers of how to move forward when moving their existing landscape to HANA. As you will see, they in fact replaced their (stand-alone) BW with a lighter version that sits within their Suite-on-HANA system (embedded BW). It is a pragmatic, real-world example of removing a separate BW instance, thereby reducing (albeit not eliminating) the need for data replication while keeping and reusing existing BW analytic models and the underlying processing by BW. So, Nucor's story goes like this:

Background

During the initial stages of our legacy ECC to HEC ECC (SoH) migration, based on the information provided to us, we thought we could replace our legacy BW completely with native HANA modeling and analytics.

BW Replacement with HANA

On further analysis and discovery, we came to the conclusion that the custom trend analysis reporting we do, based on point-in-time snapshots of data, would not be easy if we were to completely eliminate BW. We had three options to resolve this:
  1. Keep BW: migrate Legacy BW to BW on HANA . This option was expensive as we had to pay for additional BW migration and hosting fees.
  2. Eliminate BW and try to re-develop existing Dashboards, Queries, Broadcasting from Scratch using native HANA modeling and tools. This was also very expensive and very time consuming.
  3. Embedded BW Solution: where we migrate our entire BW to the same database as our ECC HANA instance.

Embedded BW Option

So we considered Embedded BW as a potential solution. The rationale for going in this direction: we were already in the Nov 2015 timeframe and the SoH go-live was scheduled for Feb 2016. To go with one of the other options, our business was not ready to pay for additional migration / development costs, and we were also running short of time to complete our BW migration in conjunction with our SoH migration. Also, our legacy BW footprint was very small and usage is low: every Monday, a user at each division runs a series of reports for submission to Corporate, and on Sunday, BEx Broadcasting distributes dashboards via email in PDF format. We also had a brief meeting with SAP SMEs on this direction and received some great information at TechEd. We had a few challenges when we decided to go the Embedded BW route:
  • Technical Challenges:
    • BI Content installation – will this affect ECC SoH?
    • To reduce costs we had one non-prod ECC Java connected to both the ECC Dev and QA ABAP stacks. Will a single BI Java support multiple backend ABAP (BW) systems without any issues?
    • System load and performance
  • Migration Challenges:
    • Transfer of BW Objects and Data from Legacy to HEC data centers
    • Setting up of transfer routes and resolving transport issues
    • Does Embedded BW provide all BW functionalities currently in use?

Result

Our team was able to overcome these challenges by working closely with the SAP team (HEC and Services) and successfully completed this challenging migration. We have been using Embedded BW in production for about a month now with no issues.

Embedded BW - Challenges during migration

  • Transports from BWP to ECD-H encountered many activation errors. Errors were resolved by activating the missing objects using BI Content and then re-importing the same transports; this cycle repeated multiple times.
  • Transports from ECD-H to ECQ-H also encountered activation errors. Some InfoObjects were not included, so multiple attempts were necessary.
  • Data extraction from BWP was time consuming as it required a data export from the InfoCube to a table using Open Hub, followed by a data download from the table to a PC file. The download to PC file had to be repeated many times since the maximum size of each file is about 150 MB. This process was used to handle about 18 GB of data.
  • We encountered various issues with BI Java, many of which pertain to ECD and ECQ sharing the same BI Java server.
  • We also encountered challenges with single-sign on for BI Java as our Active Directory user IDs do not match our SAP user IDs.

[Image: Nucor's system landscape]

Thanks!

To our Nucor colleagues for sharing this with us and allowing us to share it within this blog. This is an excellent real-world example of case 1 as described in the blog S/4HANA and #BWonHANA, maybe with a certain overlap with case 3.

 

This blog has been cross-published here. You can follow me on Twitter via @tfxz.

SAP HANA Project Based Workshop



 

Content

The workshop content is organized into four areas, covering 15 subjects in total:

1) Business Process
   1. Business Process
   2. Technical Process

2) One end-to-end implementation
   3. Project Preparation
   4. Business Blueprint
   5. Realization
   6. Cut-Over
   7. Go-Live

3) Deployment
   8. Plan
   9. Build
   10. Run

4) ERP Cycle
   11. Client
   12. Server
   13. Database
   14. DataSource
   15. Integration to other systems

 

If a consultant learns these 15 subjects, then he or she is able to do an SAP consultant's job. Many consultants come from non-ERP or other software fields and think it is very easy; then, after six months in an SAP job, they run away from SAP and never return.

 

But if you learn these 15 things then you will definitely get the job, and you will keep building more skills day by day.

SAP HANA Project Based Workshop Content - YouTube

Update – Data LifeCycle Management for BW-on-HANA


Data managed by the SAP BW application can be classified into 3 basic categories:

  • Hot – high-frequency access (planning, reporting, advanced analytics, data manipulations)
  • Warm – less frequent access, relaxed performance SLAs (usually batch jobs), simple access patterns
  • Cold – archive, data lakes

 

For the "warm" data in your BW system, SAP HANA currently offers 2 options: the non-active data concept (high-priority RAM displacement) and HANA dynamic tiering (using an ExtendedStorage server). We have evaluated the situation anew and have seen that, with the advancements in hardware technology and in SAP HANA, we now have an even simpler and more intriguing option: instead of introducing a new ExtendedStorage server into your HANA cluster to store the "warm" data, you can use standard HANA nodes, but with a different sizing formula and RAM/CPU ratio.

You basically run an "asymmetric" HANA scale-out landscape: a group of nodes for your "hot" data with standard sizing, and another group with a "relaxed" sizing, the "extension" group, on which you basically store more "warm" data than RAM is available. This allows you to run a HANA scale-out landscape with fewer nodes and less overall RAM but with the same data footprint.

 

Such a setup can be significantly easier to set up and administrate, and it offers, right out of the box, all features of HANA with respect to operations, updates and data management. The differentiation of data into "hot" and "warm" can be done easily via the BW application, using the standard HANA techniques to relocate the data between nodes.

 

We are currently preparing the boundary conditions for these setups and are in intensive discussion with our hardware partners to enable attractive offers. The goal is to start a beta program in mid-2016. Please stay tuned for more details to follow very soon.

 

Please note that this setup is currently only planned for BW-on-HANA, since it heavily relies on BW's partitioning, pruning and control of the access patterns. For applications looking for a more generic "warm" data concept, the HANA dynamic tiering feature is still a valid option. HANA dynamic tiering continues to be supported for BW as well.

[Image: AsymmetricLandscape_w_ExtensionGroup.png]

SAP BW 7.5 Powered by SAP HANA: Feature Tour and Road Map - SAP TechEd Lecture of the Week


Hello everybody,

The interest of our customers and partners in SAP BW powered by SAP HANA is constantly high. There are many questions regarding the SAP BW 7.5 release and many requests to provide the presentation, both as PPT and as a recording. Therefore it is a great opportunity that the "SAP BW 7.5 and Roadmap" presentation at TechEd 2015 has been recorded and is available online.

 

The session:

SAP BW powered by SAP HANA was the very first SAP application that ran completely on top of SAP HANA and has meanwhile reached great adoption. For many customers it has become the entry point for their SAP HANA roadmap. With the SAP BW 7.5 release, customers benefit from further simplification, integration with SAP HANA EIM, additional Big Data scenarios, and the possibility to run SAP BW 7.5 powered by SAP HANA with only the HANA-optimized objects. At the end, the session also provides an outlook on SAP's further DW strategy.

 


Additional interesting sessions



Enjoy in case you did not get a chance on site at TechEd 2015.


Cheers,

Lothar




More Details - HANA Extension Nodes for BW-on-HANA


This blog provides additional details about the new concept in HANA to manage "warm" data in BW. It follows up on my blog where I initially introduced this idea: Update – Data LifeCycle Management for BW-on-HANA

What are the deployment options for HANA extension nodes?

There are basically three different deployment options for extension nodes in a HANA system for BW. Which option you choose depends on your landscape, the sizing for the amount of "warm" data in your system, the BW release, the HW partner, ... and, of course, the timeline.

[Image: ExtensionNodeDeployments.JPG]

Why does it work for BW?

The standard HANA sizing guidelines allow for a data footprint of 50% of the available RAM. This ensures that all data can be kept in RAM at all times and there is sufficient space for intermediate result sets. These sizing guidelines can be significantly relaxed on the Extension Group, since “warm” data is accessed

  • less frequently,
  • with reduced performance SLAs,
  • with less CPU-intensive processes,
  • only partially at the same time.

The BW application controls and understands the access patterns to BW tables and derives appropriate partitioning and table distribution for “warm” tables. This way BW ensures that a “warm data” table is not loaded completely to memory, but only partially due to efficient partition pruning. The load to memory of the much smaller table partitions is not critical in the usual BW operations (batch load processes).
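This partial loading can be observed in the monitoring views; a small sketch (schema and table name are placeholders):

-- Check which partitions of a "warm" table are currently in memory;
-- LOADED is FULL, PARTIALLY or NO per partition.
SELECT TABLE_NAME, PART_ID, LOADED,
       ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024,0) AS MEMORY_MB
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAPBWP'
  AND TABLE_NAME = '/BIC/AMYADSO2';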

Based on the modelling object type BW can automatically provide a very good default for the “warm” setting.

  • Up to 50% of BWs data can be classified as “warm” (experience with “non-active” data concept)
  • access to “warm” tables is partition-based in >95% of all cases (write(=merge)&read)
  • data in “warm” tables is part of batch processes in most cases (load-to-memory not critical)
  • query access to “warm” data will be significantly slower – must be accepted/part of the deal

[Image: Write_Extract_Pattern.PNG]

How to classify BW objects as “warm”?

The classification of a BW object as “warm” is part of the modeling task in the corresponding modeling UI. The default for all objects is “hot”.

  • A newly created object classified as “warm” has all its database tables created on the “extension” node(s)
  • An object containing data does not change the location of its tables immediately during object activation, but only changes the metadata of the object. To move the tables there are two alternatives:
    • Execute a table redistribution using the SAP DWF Data Distribution Optimizer (DDO) – this can be seen as a regular housekeeping action,
    • Use transaction RSHDBMON to move single tables/partitions manually (a low-level sketch of such a move follows below).
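Under the hood such a move corresponds to a HANA DDL statement along the lines of the sketch below (host, port, schema and table name are placeholders; in practice you would let DDO or RSHDBMON do this for you):

-- Move a complete table to an extension host:
ALTER TABLE "SAPBWP"."/BIC/AMYADSO2" MOVE TO 'exthost:30003';

-- Or move a single partition only:
ALTER TABLE "SAPBWP"."/BIC/AMYADSO2" MOVE PARTITION 2 TO 'exthost:30003';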

 

What type of objects can be classified as “warm” in BW?

This paragraph describes which BW objects can be classified as “warm” and in which BW release the option is available. It does not mean that all these objects necessarily should be classified as “warm” – it depends on the individual use case.

  • InfoCubes – not available. Please look at the options for advanced DSOs.

  • Classic DSOs (exception see below) – not available. Please look at the options for advanced DSOs.

  • DataSources/PSA tables – available as of BW 7.4 SP10. A PSA table can be classified as "warm". PSA tables are partitioned, grouping together one or more load requests. Load operations only change the latest partition --> a small amount of data for the MERGE process. Extract operations only use the latest partition in most cases (delta loads).

  • Write-optimized DSOs – available as of BW 7.4 SP10. See the PSA comment. Caution: only write-optimized DSOs with usage type Corporate Memory should be classified as "warm", i.e. no reporting access, no heavy look-up usage.

  • Advanced DSOs w/o activation – available as of BW 7.4 SP10. Partitioning and access similar to PSA. Caution: see write-optimized DSOs.

  • Advanced DSOs w/ activation – available as of BW 7.5 SP01. Load and extract patterns are request/partition-based – similar to PSA tables. Caution: DSO activation needs to load and process the complete table in memory --> only aDSOs with very infrequent load activity should be classified as "warm"; use RANGE partitioning of the aDSO where possible to allow pruning.

  • Advanced DSOs with reporting access – available as of BW 7.5 SP01. Load patterns are request/partition-based – similar to PSA tables. Caution: query read access may load the complete table (all requested attributes/fields) to memory and query processing may be very CPU-intensive. Only classify objects with (1) very infrequent reporting access, (2) highly selective access (few fields, selective filters hitting RANGE partition criteria if available), and (3) relaxed performance expectations due to the load to memory and less CPU.

  • RANGE partitions of advanced DSOs – planned for BW 7.5 SP05. Selected RANGE partitions of aDSOs can be classified as "warm". Load and read patterns are request/partition-based – similar to PSA tables. Caution: DSO activation does partition pruning and loads and processes the complete partitions to memory --> only aDSO partitions with very infrequent load activity should be classified as "warm".

What is the impact for the HANA system?

A HANA system with extension node(s) first of all looks and behaves like a standard HANA scale-out system. All operations and features & functions work as before (like system replication, ...).

However there are a few things that should be considered:

  • The HANA system can now store more data, which has an impact on backup and recovery times. Especially the higher data volumes on the extension node(s) may now dominate the backup and recovery times; this depends on the hardware for the HANA system.
  • Forced unloads are now very common on the extension node(s). On the “hot” nodes many unloads are a sign of insufficient sizing.
  • In option 3 and – possibly depending on the choice of hardware – also in option 2, the setup of High Availability using host-auto-failover may need to be adjusted. If no dedicated standby for the extension node exists, it may be necessary to explicitly fall back to the original configuration as soon as a failing node is brought online again.
  • For non-BW data the classification “warm” with the re-location to the Extension Node(s) is not supported. If non-BW data is stored in the same HANA DB this data has to be located on the classic nodes.

When will the new concept be available?

General availability for the new concept with the deployment options 1 and 2 is planned for Q3 2016. Offerings for option 3 are planned for the end of the year.

Prerequisites:

  • SAP HANA SP12
  • BW release: see feature table above – we clearly recommend BW7.5 SP01 or higher

Is anyone still planning to deploy BW-on-HANA via Scale-Out?


Scale-out or Scale-up - that is the question!  I'm preparing for my SAPPHIRE/ASUG session (A4526: Scale-up Architecture Makes Deploying SAP HANA Simple) and I'm trying to get a sense for what new HANA implementations are planning when it comes to hardware infrastructure.

 

With a fear of showing my age, I want to draw a comparison to a time when scale-out data centers were so common that the term scale-out wasn't used because it was considered normal.  Last night's episode of Silicon Valley (the data center scene, for you SV fans) made me think back to my early days as an MVS systems programmer - we outgrew our data center and had to build a new one with several times the square footage of the original.  While it seemed like a great idea at the time, Moore's Law and improvements in software reduced the required hardware footprint by so much that we ended up with enough unoccupied raised-floor space to accommodate a Coldplay concert!

 

Throughout the years, multi-node solutions have been replaced with single-node equivalents - back in the day, large SAP deployments were based on multi-node relational databases because of capacity requirements.  We wouldn't even consider doing this today (with a few exceptions).

 

By now, I'm sure you can see that it is my opinion that only the absolute biggest BW systems (greater than 16 TB of HANA memory) require the scale-out model.  I'm surprised when I read forums and blogs where customers describe their upcoming scale-out deployments.  I believe that the scale-out approach adds a significant amount of complexity while providing only minor benefits.  With that said, I'm sure that some of you disagree with my opinion and have thoughtful reasons as to why you prefer a scale-out approach.

 

Please comment as you see fit and stop by my session on Thursday at 2PM so we can compare notes.

 

Thanks,

Joe Caruso

Force Awakens: BW powered by HANA


HANA certainly impressed the community with its unique capabilities. Once we saw live what HANA could do, we all - me included - started to question the necessity of BW, because HANA live analytics totally allured our minds and we lost our wisdom.

However, BW stayed cool and gave us time to digest the innovations brought by HANA.

Last week we completed the BW-on-HANA migration of one of Turkey's biggest companies, almost with zero defects. 1.6 TB of data shrank to 300 GB, and query performance improved approximately 20 times. All reporting and the whole solution landscape, including BPC, moved to HANA. The ERP migration to HANA has started: we converted the existing system for sandbox purposes, and the performance results of the HANA-optimized transactions were incredible.

The next step is to enable real-time operational reporting - but how?

Let's assume that BW were not supported on HANA and only available with other DBs; then the only way to benefit from HANA Live would be the Eclipse-based modelling tools, or simply HANA Studio. All virtual data models would have to be modelled and enhanced in this new environment (at least for most of us). But what would happen to the analytics and reporting heritage and the investments that took shape over many years - not only from a solution perspective, but also from a people point of view?

Thanks to SAP, our trusted long-time pal BW has evolved faster than us and has become the foundation of the new era's megatrends while keeping that heritage. Now BW is guiding the whole community and moving to the next stage.

Just a small example to express myself: today your organization answers these questions with BW at one day's latency, using SAP ERP extractors:

  • How many open quantities for each order item? What is the value?
  • What are my top 10 materials based on incoming orders?



With SAP ERP on HANA powered by HANA Live, your existing BW running on HANA serves the same information in real time!

BW_HANA_reloaded_Sarhan_20.jpg

The HANA Live view SalesOrderItemNetAmountQuery can be the basis for your virtual data provider:

You can create a VirtualProvider based on a SAP HANA model and merge the VirtualProvider with other InfoProviders in a CompositeProvider or MultiProvider. This function also allows you to use the OLAP functions of the BW system to analyze the SAP HANA models. It is suited to more stable, longer-term scenarios:

BW_HANA_reloaded_Sarhan_40.jpg

Now it is time to plan and consume mixed scenarios for the real benefits of HANA.

Cheers,

Sarhan.

How to Create and Maintain Info Objects Using Eclipse Data Modelling for SAP BW 7.5


Beginning with SAP BW 7.5, modelling in the Data Warehousing Workbench (SAPGUI) has been replaced by Eclipse-based Modelling tools.

Eclipse Modelling tools provide a unified modelling environment for configuration, management, and maintenance of BW and HANA metadata objects.

In this post I will walk you through the process of creating and maintaining Info objects using Eclipse. I'll also discuss some additional features introduced in SAP BW 7.5.

 

Following are the major feature differences between SAP BW 7.4 and SAP BW 7.5 (Edition for SAP HANA):

  • Time characteristics as attributes for 0CALMONTH, 0FISCPER, etc. – BW 7.4: No; BW 7.5: Yes
  • Extended characteristics such as Unit Conversion, Miscellaneous, and Extended – BW 7.4: No; BW 7.5: Yes
  • View data element, SID table, master table, text table, and hierarchy table for an info object – BW 7.4: Yes, available on the corresponding tabs of the info object; BW 7.5: not available in Eclipse (can still be viewed in SAP GUI)
  • Support for INT8 key figures – BW 7.4: No; BW 7.5: Yes (overcomes the 2 billion limitation for integer values)
  • Disable display attributes in a reference info object – BW 7.4: No; BW 7.5: Yes, the attributes of reference info objects can be hidden on the Attributes tab using a property called Visible to Consumers
  • Additional runtime properties – BW 7.4: available in transaction code RSRT; BW 7.5: available as the Runtime Properties tab in the info object

 

In SAP BW 7.5 (Edition for SAP HANA), two additional tabs for maintaining Extended and Runtime Properties have been introduced in the info object design/maintenance screen.

 

Procedure for creating an info object in the BW Modelling tools:

Right-click on the info area to create info objects.

1.png

Enter a name and description for the info object, choose the info object type and data type, and click 'Finish.'

2.png

 

List of tabs available in Info object:

General Tab:

3.png

You will land on the General tab by default. Check Master Data, Texts, and Usable as InfoProvider in the Properties panel to use the info object as an InfoProvider.

Options to create external HANA views for your characteristic are also available. These options enable you to use the views in other HANA models:

  1. External SAP HANA View for Master Data – attribute view
  2. External SAP HANA View for Reporting – analytic view
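As a quick, hedged illustration of what consuming such a generated view looks like: it can be queried with plain SQL. The package path below is an assumption (it depends on your system settings), and YCOSTCENT is simply the example characteristic used later in this post:

-- Hedged sketch: selecting from the external HANA view generated for a
-- characteristic. Package path and column names depend on your system.
SELECT "YCOSTCENT", "TXTMD"
  FROM "_SYS_BIC"."system-local.bw.bw2hana/YCOSTCENT"
 LIMIT 10;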

If you enable Master Data and Texts, new tabs for master data and texts are added below; enabling Usable as InfoProvider adds the Runtime Properties tab.

You also have the option to edit the data type and the data length.

 

Master data/Texts Tab:

Unlike in BW 7.4, the master data and text table names (though unchanged) are no longer displayed.

4.png

Hierarchies Tab:

5.png

Attributes Tab:

Switch to the Attributes tab and click the 'Add' button to create display, navigation, and XXL attributes for the characteristic.

6.png

BI Clients Tab:

If you want the initial value to be displayed on top in the query output, select Include Initial Value in Sort Sequence.

7.png

Extended Tab:

8.png

Runtime Properties Tab:

Various settings for query runtime from transaction code RSRT are displayed as runtime properties.

9.png

 

Hiding display attributes in Reference Info object:

When you create a new info object by referencing another info object, the display attributes of the referenced object also appear in Query Designer. In BW 7.5, you have the option to hide display attributes of the referenced object (if required).

Create a custom Info object YCOSTCENT with reference to Info object 0COSTCENTER.

10.png

11.png

Visible to Consumers is checked by default for all attributes.

12.png

The Visible to Consumers property is available only for the display attributes of a reference info object. It can be used to disable/hide display attributes of the referenced info object.

 

In this case, uncheck the Visible to Consumers checkbox for the display attribute 0BUS_AREA.

13.png

Check the info object YCOSTCENT – no errors, and info object YCOSTCENT is successfully activated.

14.png

How can you ensure the 0BUS_AREA display attribute of the YCOSTCENT info object is really hidden?

Create a query on top of info object YCOSTCENT.

15.png

Provide the required details.

16.png

As you can see below, the YCOSTCENT – 0BUS_AREA attribute is not visible in the InfoProvider view.

17.png

At the same time, it is still visible in the 0COSTCENTER info object, and creating a query on that object will display the 0BUS_AREA attribute in the query view.

18.png

A Change for the Better?

SAP BW 7.5 makes life a little easier for developers by incorporating Eclipse-based Modelling tools for data modelling. This change provides a unified modelling environment for BW and HANA metadata objects - a substantial improvement.

SAP BW on HANA Advanced DSO


With SAP BW on HANA 7.4 SP10, new changes have arrived: Advanced DSOs, CompositeProviders and Open ODS views are created only in the BW Modelling tools, not in SAP GUI.

 

Going forward, BW data models will use only three objects:

1. Advanced DSO – persistence layer

2. Open ODS view – virtual layer

3. CompositeProvider – joins/unions

 

This blog describes the Advanced DSO.

 

An Advanced DSO is a persistent object that combines properties of the objects below:

1. Field-based structure like a PSA (field-based modelling)

2. No activation required, like a write-optimized DSO (update property)

3. Three tables (active, new, change log), like a standard DSO (overwrite property)

 

Below are the changes that have happened at object level.

System version: BW 7.40 SP10

 

1.png

Advanced DSO tables and their purpose:

2.png

The tables below are generated when activating the Advanced DSO (ADSO name – ZODS_PAN1):

3.png

 

This document discusses what I would call the first Advanced DSO model.

When creating the ADSO, select Activate/Compress Data together with the Write Change Log checkbox.

If you select the options above, the ADSO acts like a standard DSO and generates 3 + 2 tables.

 

Modeling ADSO in Eclipse using BW modeling tools

 

Go to HANA Studio -> BW Modeling tools.

Select the info area -> right-click -> select Advanced DSO object.

The screen below will appear:

4.png

Example ADSO name – ZODS_PAN1

 

Create from templates

 

  • None – no template object is used.
  • Data Source – if you select Data Source, you need to provide the source system name and the data source name; the ADSO is then created based on the data source. The data source has to exist in the respective source system.
  • Info Provider – you can reuse an existing InfoProvider as a template.

 

Click Finish, and the screen below will appear.

 

Modeling properties

 

Active/Compress Data

Write Change Log – if selected, delta data is written to the change log table.

Keep Inbound Data, Extract from Inbound Table – if selected, the data is kept in the inbound table and is available for reporting.

Unique Data Records – if selected, duplicate records are not allowed.

All Characteristics are Key, Reporting on Union of Inbound and Active Table – if selected, the ADSO acts like a cube.

Extended Table – the extended table concept relates to warm data and optimizes main memory resource management in SAP HANA by using extended tables (see the sketch below the screenshot).

5.png                                                                                            
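As a side note on the extended table concept: in plain HANA SQL, dynamic tiering exposes warm storage via the USING EXTENDED STORAGE clause. A hypothetical sketch only (it assumes the dynamic tiering option is installed; for an ADSO, BW manages this itself via the Extended Table checkbox):

-- Hypothetical sketch: a table created in extended (warm) storage.
-- Invented names; not something you would create manually for an ADSO.
CREATE TABLE "ZWARM_INBOUND" (
  "REQTSN" NVARCHAR(23),
  "AMOUNT" DECIMAL(17,2)
) USING EXTENDED STORAGE;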

I will discuss the first model – write change log with activate/compress data.

Go to the Details tab -> the screen below will appear.

Add the info objects and key figures

Then select the keys for the ADSO; data will be transferred according to the keys.

6.png

7.png

Customer is selected as the key field; an ADSO supports a maximum of 120 key fields.

8.png

Then create the data source in the BW system and map the transformation from the data source to the ADSO.

Aggregation type – overwrite and summation

9.png
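To illustrate the two aggregation types with a small, hypothetical example: if two requests deliver the key 4711 with quantity values 10 and 15, summation results in 25 after activation, while overwrite keeps only the most recent value, 15.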

ADSO Manage screen

Requests are generated as timestamps, namely the RTSN (Request Transactional Serial Number); with the RTSN we can easily find out when a request was loaded.

Another difference is in request generation: a standard DSO generates a BI_REQ request ID during loading, whereas an ADSO uses the Request TSN – Request Transactional Serial Number.

 

When you activate a transformation, the system checks whether it can be executed in SAP HANA.

 

Whether this will be possible basically depends on the operations and objects used.

The following operations and objects are not supported for executing the transformation in SAP HANA:

Queries as InfoProviders are not supported as a source.

ABAP routines are not supported (rule type routine, routines for characteristics, start, end, and expert). 

Rule groups are not supported.

10.png

 

The next blog will discuss the second ADSO model.

 

Thanks,

Phani.

Changing Fox formula to an own planning function type based on AMDP running in memory (PAK)


Hello,

 

Due to the fact that there is not much information available about optimization for HANA, I would like to share some experiences. First of all, this is just one example - there are different ways to get good performance out of HANA in combination with PAK. FOX formulas which call ABAP function modules do not run in memory. The first possibility would be to change the function module calls to reading reference data from external aggregation levels. I have also done that with the same example, but that would be another blog. Another possibility is to create your own planning function type based on an AMDP class.

 

This is what I would like to show now. The FOX formula distributes data from monthly level to daily level, but only onto working days. That means the math is relatively easy: it is the value per month, divided by the number of working days, multiplied by 1 if the day is a working day and by 0 if it is not.
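A quick worked example: a monthly value of 1000 in a month with 20 working days yields 1000 / 20 = 50 on each working day, and 0 on weekends and holidays. (The same value 1000 is used as the test amount in the SQL script further below.)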

 

First, the old FOX formula code (at the time the code was created there was no possibility to use internal tables in FOX; the coding is simplified - we use more key figures and characteristics, but that is not important for understanding how it works):

DATA CALYEAR TYPE 0CALYEAR.
DATA CALMONTH2 TYPE 0CALMONTH2.
DATA CALDAY TYPE 0CALDAY.
DATA ARBT TYPE F.
DATA TAGE TYPE F.
DATA VALUETAG_YUMVISTU TYPE F.
DATA COUNTER TYPE I.
DATA DATUM TYPE 0CALDAY.
DATA HOLIDAY TYPE STRING.
DATA DUMMY TYPE F.

FOREACH CALYEAR, CALMONTH2, CALDAY.

* Only the monthly bucket (empty CALDAY) is distributed.
IF CALDAY IS INITIAL.

* Number of working days (ARBT) and calendar days (TAGE) of the month.
  CALL FUNCTION Z_ANZ_ARBT_TAGE_MON
    EXPORTING
      I_MONAT = CALMONTH2
      I_JAHR = CALYEAR
    IMPORTING
      E_VAL_ARBT = ARBT
      E_VAL_TAGE = TAGE.

* Value per working day.
  VALUETAG_YUMVISTU = { YUMVISTU, #, CALMONTH2, CALYEAR } / ARBT.

  COUNTER = 0.

  DO.
    COUNTER = COUNTER + 1.

    CALL FUNCTION Z_WORKING_DAY_CHECK_OF
      EXPORTING
        MONAT = CALMONTH2
        TAG = COUNTER
        JAHR = CALYEAR
      IMPORTING
        HOLIDAY_FOUND = HOLIDAY
        E_DATUM = DATUM.

    IF HOLIDAY = 'X'.
* Holiday or weekend: nothing is posted on this day.
    ELSE.
      { YUMVISTU, DATUM, CALMONTH2, CALYEAR } = VALUETAG_YUMVISTU + { YUMVISTU, DATUM, CALMONTH2, CALYEAR }.
    ENDIF.

    IF COUNTER = TAGE.
* All days of the month processed: clear the monthly bucket.
      { YUMVISTU, #, CALMONTH2, CALYEAR } = 0.
      EXIT.
    ENDIF.

  ENDDO.
ENDIF.

ENDFOR.

 

Before you start creating the coding in the new modelling tools via HANA Studio / Eclipse (AMDP is not officially supported in SAP GUI), you need to generate the calendar data in HANA Studio.

 

generate_time_data.jpg

 

generate_time_data_2.jpg
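If you want to double-check the generated data, a quick query helps - a hedged sketch, assuming the calendar data was generated at day granularity (the generated rows land in the _SYS_BI time dimension table):

-- Hedged check: the generated calendar data at day granularity.
SELECT "DATE_SAP", "DATE_SQL"
  FROM "_SYS_BI"."M_TIME_DIMENSION"
 ORDER BY "DATE_SAP"
 LIMIT 10;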

That is the basis for the next steps. In the SQL script you now need the information which day is a working day and which is not. Furthermore, the total number of working days per month is needed for the calculation.

 

For testing purposes it is easier to do the first steps without an ABAP Managed Database Procedure. That means, if you have the necessary privileges, you can create the procedure directly on the HANA DB. Example code (you can find a detailed description of the coding in the class example below):

CREATE PROCEDURE SAPSR3.myprocedurename ( ) LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN

DECLARE l_FactoryCalendarID VARCHAR(2);
DECLARE l_duration int;

l_FactoryCalendarID := '01';

lt_tab =
SELECT DISTINCT DATE_SAP AS DATE_SAP, DATE_SAP + 1 AS SAP_MORGEN
FROM "_SYS_BIC"."ottofuchs.test/TIME";

lt_input = SELECT DISTINCT
  "DATE_SAP" AS "StartDate",
  "SAP_MORGEN" AS "EndDate",
  l_FactoryCalendarID AS "FactoryCalendarId",
  l_duration AS "Duration"
FROM :lt_tab;

CALL "_SYS_AFL"."ERPA_FACTORY_DAYS_BETWEEN_DATES_PROC"( 'SAPSR3', 'SAPSR3', :lt_input, :lt_result );

lt_result2 = SELECT "StartDate", "YEAR", "MONTH", "Duration"
FROM :lt_result AS a
JOIN "_SYS_BIC"."ottofuchs.test/TIME" AS b
  ON a."StartDate" = b."DATE_SAP"
ORDER BY "StartDate";

lt_result3 = SELECT "YEAR", "MONTH", SUM("Duration") AS "Days"
FROM :lt_result2
GROUP BY YEAR, MONTH;

SELECT a."StartDate", a."YEAR", a."MONTH", a."Duration", b."Days",
       1000 / b."Days" * a."Duration" AS "Amount"
FROM :lt_result2 AS a
JOIN :lt_result3 AS b
  ON a."YEAR" = b."YEAR" AND
     a."MONTH" = b."MONTH";

END;

 

If you are interested in the parameters of the procedure ERPA_FACTORY_DAYS_BETWEEN_DATES_PROC, just take a look at it in HANA Studio:

Prameters1.jpg

Parameters2.jpg

 

You can try it by calling:

CALL myprocedurename;

 

The result is:

result.jpg

OK, this has to be combined with the monthly data from the aggregation level; you can see the full coding below. After distribution, the monthly value should be zero. Due to the fact that we have to deliver delta information at the end, we have to negate the original value. That is basically the SQL Script part, but now we need an AMDP. The AMDP has the advantage that you can use the ABAP transport system, instead of using the HANA transport system in addition to the ABAP system, which makes synchronization more difficult.

 

There is a report "RSPLS_SQL_SCRIPT_TOOL" from SAP which gives you some sample coding once you enter an aggregation level and a function type:

report.jpg

 

As a result you will get some example coding with the needed type for the aggregation level and some basic parts of the class which needs to be created. In this coding, the addition OPTIONS READ-ONLY may be missing. This can result in strange error messages if you have not set the parameters in a way which allows writing data in SQL Script. If you don't want to write data, you should add the READ-ONLY option.

 

Now you have to create a class (in Eclipse), and then create a new planning function type in transaction RSPLAN (NW 7.4):

functiontype2.jpg

functiontype.jpg

 

The new version as an AMDP:

CLASS ZBW_PLFU_MON_DAY DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
* This is the structure of the aggregation level
    TYPES: BEGIN OF Y_S_VXSD004X1,
             CALDAY     TYPE /BI0/OICALDAY,
             CALMONTH2  TYPE /BI0/OICALMONTH2,
             CALYEAR    TYPE /BI0/OICALYEAR,
             CO_AREA    TYPE /BI0/OICO_AREA,
             DISTR_CHAN TYPE /BI0/OIDISTR_CHAN,
             INFOPROV   TYPE RSINFOPROV,
             MATERIAL   TYPE /BI0/OIMATERIAL,
             MAT_PLANT  TYPE /BI0/OIMAT_PLANT,
             MAT_SALES  TYPE /BI0/OIMAT_SALES,
             PLANT      TYPE /BI0/OIPLANT,
             PROFIT_CTR TYPE /BI0/OIPROFIT_CTR,
             SALESORG   TYPE /BI0/OISALESORG,
             SALES_DIST TYPE /BI0/OISALES_DIST,
             SOLD_TO    TYPE /BI0/OISOLD_TO,
             VTYPE      TYPE /BI0/OIVTYPE,
             YVERSIOP   TYPE /BIC/OIYVERSIOP,
             YUMVISTU   TYPE /BIC/OIYUMVISTU,
           END OF Y_S_VXSD004X1.
    TYPES: Y_T_VXSD004X1 TYPE STANDARD TABLE OF Y_S_VXSD004X1.

    INTERFACES IF_RSPLFA_SRVTYPE_TREX_EXEC.
* If you need ref_data you need to use the interfaces IF_RSPLFA_SRVTYPE_TREX_EXEC_R / IF_RSPLFA_SRVTYPE_IMP_EXEC_REF
    INTERFACES IF_RSPLFA_SRVTYPE_IMP_EXEC.
    INTERFACES IF_AMDP_MARKER_HDB.

    CLASS-METHODS: MON_TO_DAY IMPORTING VALUE(I_VIEW) TYPE Y_T_VXSD004X1
                              EXPORTING VALUE(E_VIEW) TYPE Y_T_VXSD004X1.

  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.

CLASS ZBW_PLFU_MON_DAY IMPLEMENTATION.

  METHOD IF_RSPLFA_SRVTYPE_TREX_EXEC~INIT_AND_CHECK.
    E_TREX_SUPPORTED = RS_C_TRUE.
  ENDMETHOD.

  METHOD IF_RSPLFA_SRVTYPE_TREX_EXEC~TREX_EXECUTE.
    DATA: L_R_SQL_SCRIPT   TYPE REF TO IF_RSPLS_SQL_SCRIPT,
          L_PROCEDURE_NAME TYPE STRING,
          L_T_IOBJ_PARAM   TYPE IF_RSR_PE_ADAPTER=>TN_T_IOBJ_PARAM,
          Y_T_VXSD004X1    TYPE Y_T_VXSD004X1.

    L_R_SQL_SCRIPT = CL_RSPLS_SESSION_STORE_MANAGER=>GET_SQL_SCRIPT_INSTANCE( I_R_STORE = I_R_STORE ).

    DATA(METHOD) = NEW ZBW_PLFU_MON_DAY( ).
    METHOD->MON_TO_DAY(
      EXPORTING
        I_VIEW = Y_T_VXSD004X1
      IMPORTING
        E_VIEW = DATA(LT_E_VIEW) ).

    L_PROCEDURE_NAME = 'ZBW_PLFU_MON_DAY=>MON_TO_DAY'.
    R_S_VIEW-VIEW = L_R_SQL_SCRIPT->EXECUTE_SQL_SCRIPT(
        I_VIEW         = I_VIEW
        I_T_IOBJ_PARAM = L_T_IOBJ_PARAM
        I_PROC_NAME    = L_PROCEDURE_NAME
        I_R_MSG        = I_R_MSG ).
  ENDMETHOD.

  METHOD MON_TO_DAY BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

    DECLARE l_FactoryCalendarID VARCHAR(2);
    DECLARE l_duration int;

    -- Factory calendar, in our case NRW
    l_FactoryCalendarID := '01';

    -- Generated time data in HANA is the basis for the further steps
    lt_tab1 = SELECT DISTINCT DATE_SAP AS DATE_SAP, DATE_SAP + 1 AS SAP_MORGEN
              FROM "_SYS_BIC"."ottofuchs.test/TIME";

    -- Prepare the input for the HANA procedure
    lt_tab2 = SELECT DISTINCT
                "DATE_SAP" AS "StartDate",
                "SAP_MORGEN" AS "EndDate",
                l_FactoryCalendarID AS "FactoryCalendarId",
                l_duration AS "Duration"
              FROM :lt_tab1;

    -- Calculate working days between dates
    CALL "_SYS_AFL"."ERPA_FACTORY_DAYS_BETWEEN_DATES_PROC"( 'SAPSR3', 'SAPSR3', :lt_tab2, :lt_tab3 );

    -- Add year and month and remove the end date
    lt_tab4 = SELECT "StartDate", "YEAR", "MONTH", "Duration"
              FROM :lt_tab3 AS a
              JOIN "_SYS_BIC"."ottofuchs.test/TIME" AS b
                ON a."StartDate" = b."DATE_SAP"
              ORDER BY "StartDate";

    -- Working days per month
    lt_tab5 = SELECT "YEAR", "MONTH", SUM("Duration") AS "Days"
              FROM :lt_tab4
              GROUP BY YEAR, MONTH;

    -- Add the working days per month to every single day
    lt_tab6 = SELECT a."StartDate", a."YEAR", a."MONTH", a."Duration", b."Days"
              FROM :lt_tab4 AS a
              JOIN :lt_tab5 AS b
                ON a."YEAR" = b."YEAR" AND
                   a."MONTH" = b."MONTH";

    -- Create the cartesian product over the calendar days per month.
    -- Calculate: monthly value divided by working days per month, * 1 if working day, otherwise * 0.
    e_view = SELECT b."StartDate" AS "CALDAY", "CALMONTH2", "CALYEAR", "CO_AREA", "DISTR_CHAN",
                    "INFOPROV", "MATERIAL", "MAT_PLANT", "MAT_SALES", "PLANT", "PROFIT_CTR",
                    "SALESORG", "SALES_DIST", "SOLD_TO", "VTYPE", "YVERSIOP",
                    "YUMVISTU" / b."Days" * b."Duration" AS "YUMVISTU"
             FROM :I_VIEW AS a
             JOIN :lt_tab6 AS b
               ON a."CALYEAR" = b."YEAR" AND
                  a."CALMONTH2" = b."MONTH"
             WHERE a.CALDAY = '00000000'
             UNION
             -- Negate the values which are distributed from monthly level (the result of the procedure has to be the delta)
             SELECT "CALDAY", "CALMONTH2", "CALYEAR", "CO_AREA", "DISTR_CHAN",
                    "INFOPROV", "MATERIAL", "MAT_PLANT", "MAT_SALES",
                    "PLANT", "PROFIT_CTR", "SALESORG", "SALES_DIST",
                    "SOLD_TO", "VTYPE", "YVERSIOP",
                    "YUMVISTU" * -1 AS "YUMVISTU"
             FROM :i_view
             WHERE CALDAY = '00000000';
  ENDMETHOD.

ENDCLASS.


As you can see, ERPA_FACTORY_DAYS_BETWEEN_DATES_PROC is used to get the information whether a day is a working day or not (instead of using a function module). It is also possible to debug the procedure or the AMDP in the Eclipse tools.


On Oracle the FOX coding took around 300 seconds for 25,000 input records with around 350,000 records as output. With the optimized code on HANA this takes 12 seconds - and that includes copying the 25,000 records and distributing from month to day! Not too bad in my opinion. Without improving the coding, HANA combined with a new application server is already faster than the Oracle DB, but our experience is that it is often just 2 to 2.5 times faster without code pushdown. This may only be valid for our system, but it gives an impression of the performance impact.
