Are you looking to extract information from the STXL table of your SAP S/4HANA system into SAP Datasphere, in a way that not only transforms the content into readable text lines but also supports an automated pseudo-delta mechanism? If so, this blog is for you!
In this blog, we’ll walk through a step-by-step approach to extract and decode LRAW SAPscript values from the STXL ‘Text File Lines’ table, making them suitable for, e.g., (statistical) text analysis, with SAP Datasphere as the storage layer in the new SAP Business Data Cloud (BDC) landscape.
Occasionally, Purchase Orders in your organization may have a ‘Deletion Indicator’ set on the header level (e.g. EKKO-LOEKZ = ‘X’). In such cases it can be:
Challenging to determine the reason for deletion.
Difficult to identify a potential pattern that might impact Month-End Closing (MEC) data analysis, e.g. for Spend insights.
As a best practice, your organization requires users to document such non-standard deletions using the ‘F01’ Header Text field in the Purchase Orders (EKKO):
While this data is viewable in the S/4HANA GUI (e.g. via the ‘Display Purchase Order’ transaction), bulk access for analytics is limited due to the following challenges:
Information is stored in the STXL table in LRAW(7902) ‘encoded’ format.
This format is not directly extractable towards SAP Datasphere.
Even if extracted 1:1, the raw format is not usable without decoding.
For instance, the Header Text (TDID = ‘F01’) for Purchase Order (TDNAME = ‘4500002231’) is stored in encoded format in the STXL table:
It only becomes ‘human-readable’ when processed using the READ_TEXT Function Module:
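For reference, such a READ_TEXT call could look like the following minimal sketch, using the PO number and text ID from the example above (the language is taken from the logon language here):

```abap
" Minimal sketch: decode the F01 header text of PO 4500002231.
DATA lt_lines TYPE STANDARD TABLE OF tline.

CALL FUNCTION 'READ_TEXT'
  EXPORTING
    id        = 'F01'          " TDID: Header Text
    language  = sy-langu
    name      = '4500002231'   " TDNAME: the Purchase Order number
    object    = 'EKKO'         " TDOBJECT: purchasing document header
  TABLES
    lines     = lt_lines       " returns one TLINE row per text line
  EXCEPTIONS
    not_found = 1
    OTHERS    = 2.
```

Each decoded line arrives as a TLINE row, which is exactly the shape we will pivot into the custom table below.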
The question, then, is: how can we structure and automate the extraction of this data to SAP Datasphere for analysis?
To make the data usable in Datasphere, you need to implement the following components in your S/4HANA system. Note that this approach is similar to what was previously done with a customized DataSource, loading e.g. towards an SAP BW system:
Custom Table: ZSTXL_TDLINES
This table will store delta entries. Instead of storing one row per entity, the table is designed to pivot multiple lines (e.g., 10) into separate columns for each TDOBJECT/TDNAME/TDID/TDSPRAS key combination:
Alternatively, an additional ‘Sequence’ key field can be added, allowing multiple rows per entity with only a single text-line column.
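The table definition could be sketched as follows (field names, types and the CHDATE cleanup column are illustrative; only a few of the ten pivoted line columns are spelled out):

```abap
@EndUserText.label : 'STXL decoded text lines (pseudo-delta)'
@AbapCatalog.tableCategory : #TRANSPARENT
@AbapCatalog.deliveryClass : #A
define table zstxl_tdlines {
  key client   : abap.clnt not null;
  key tdobject : tdobject not null;  // e.g. 'EKKO'
  key tdname   : tdobname not null;  // e.g. the Purchase Order number
  key tdid     : tdid not null;      // e.g. 'F01'
  key tdspras  : spras not null;
  chdate       : abap.dats;          // 'Changed On' date, used for cleanup
  tdline01     : tdline;             // pivoted text line 1
  tdline02     : tdline;             // pivoted text line 2
  // ... up to
  tdline10     : tdline;             // pivoted text line 10
}
```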
2. (Optional) Parameter Table
Optional, but used for this blog to control which objects (e.g., TDOBJECT) are inserted into the table:
Here, the ‘@AbapCatalog.dataMaintenance : #ALLOWED’ annotation enables table maintenance, so the following entry can be added manually:
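A minimal sketch of such a parameter table (the name matches the ZSTXL_EXTR_P table used later in this blog; the delivery class is an assumption):

```abap
@EndUserText.label : 'STXL extraction parameters'
@AbapCatalog.tableCategory : #TRANSPARENT
@AbapCatalog.deliveryClass : #C          // customizing data
@AbapCatalog.dataMaintenance : #ALLOWED  // enables manual maintenance
define table zstxl_extr_p {
  key client   : abap.clnt not null;
  key tdobject : tdobject not null;      // e.g. 'EKKO'
}
```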
3. Extraction-enabled CDS View
A CDS view is created on top of ZSTXL_TDLINES to be used by a Replication Flow in SAP Datasphere. Since ZSTXL_TDLINES inherently tracks delta records (based on the ABAP Program detailed in the next step), additional delta.changeDataCapture annotations in the CDS view may not be necessary. Optional filters, such as TDID = ‘F01’ or on the TDOBJECT, can be applied to mimic an extractable view (per entity) similar to the historical BW DataSource setup:
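The extraction view could be sketched along these lines (annotations and filter values are illustrative; @Analytics.dataExtraction.enabled marks the view as extraction-enabled):

```abap
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Extraction view on ZSTXL_TDLINES'
@Analytics.dataExtraction.enabled: true
define view entity ZV_ZSTXL_TDLINES
  as select from zstxl_tdlines
{
  key tdobject,
  key tdname,
  key tdid,
  key tdspras,
      chdate,
      tdline01,
      tdline02,
      // ... up to
      tdline10
}
where tdid     = 'F01'   // optional: restrict to Header Texts
  and tdobject = 'EKKO'  // optional: restrict to Purchase Order headers
```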
4. An ABAP Program, ZSTXL_TDLINES_DELTA_FILL, that:
Contains the definitions, identifies new (delta) records, and deletes outdated records to keep the table small:
Combines the STXL (content), STXH (‘Changed On’ date) and ZSTXL_EXTR_P (the ‘EKKO’ TDOBJECT value as optional parameter) tables:
Runs a (first) LOOP, calling the READ_TEXT Function Module for each of the records obtained above and filling a base structure:
Runs a secondary LOOP (within the first LOOP), pivoting the TDLINE entries into the separate columns and performing the INSERT into the ZSTXL_TDLINES table:
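Putting these pieces together, the program could be sketched as follows. This is a simplified sketch, not the blog’s exact implementation: the CHDATE column and the use of the STXH last-change date field TDLDATE are assumptions, and error handling is reduced to a minimum:

```abap
REPORT zstxl_tdlines_delta_fill.

DATA: lt_lines  TYPE STANDARD TABLE OF tline,
      ls_out    TYPE zstxl_tdlines,
      lv_cutoff TYPE d.

lv_cutoff = sy-datum - 1.

" 1) Housekeeping: delete outdated records to keep the table small.
DELETE FROM zstxl_tdlines WHERE chdate < @lv_cutoff.

" 2) Delta selection: text headers changed since the cutoff,
"    restricted to the TDOBJECTs maintained in ZSTXL_EXTR_P.
SELECT h~tdobject, h~tdname, h~tdid, h~tdspras, h~tdldate
  FROM stxh AS h
  INNER JOIN zstxl_extr_p AS p ON p~tdobject = h~tdobject
  WHERE h~tdldate >= @lv_cutoff
  INTO TABLE @DATA(lt_keys).

" 3) First LOOP: decode each text via READ_TEXT.
LOOP AT lt_keys INTO DATA(ls_key).
  CALL FUNCTION 'READ_TEXT'
    EXPORTING
      id       = ls_key-tdid
      language = ls_key-tdspras
      name     = ls_key-tdname
      object   = ls_key-tdobject
    TABLES
      lines    = lt_lines
    EXCEPTIONS
      OTHERS   = 1.
  CHECK sy-subrc = 0.

  CLEAR ls_out.
  ls_out-tdobject = ls_key-tdobject.
  ls_out-tdname   = ls_key-tdname.
  ls_out-tdid     = ls_key-tdid.
  ls_out-tdspras  = ls_key-tdspras.
  ls_out-chdate   = ls_key-tdldate.

  " 4) Secondary LOOP: pivot up to 10 TDLINE rows into TDLINE01..TDLINE10.
  LOOP AT lt_lines INTO DATA(ls_line) FROM 1 TO 10.
    DATA(lv_comp) = |TDLINE{ sy-tabix WIDTH = 2 ALIGN = RIGHT PAD = '0' }|.
    ASSIGN COMPONENT lv_comp OF STRUCTURE ls_out TO FIELD-SYMBOL(<fs_line>).
    IF sy-subrc = 0.
      <fs_line> = ls_line-tdline.
    ENDIF.
  ENDLOOP.

  MODIFY zstxl_tdlines FROM ls_out.  " insert new / update existing entry
ENDLOOP.
```

MODIFY (rather than INSERT) is used in the sketch so that a re-run within the same delta window updates an already-captured entry instead of failing on a duplicate key.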
5. A Job, J_ZSTXL_TDLINES_DELTA_FILL, to schedule the ABAP Program and fill the ZSTXL_TDLINES table consistently
Via SM36, the Job can be created and scheduled, which for this blog is scheduled to run every 4 hours:
After execution of the J_ZSTXL_TDLINES_DELTA_FILL job, the ZSTXL_TDLINES table can be checked. And indeed, the expected entry is now available:
Please note that this entry will be deleted on the 24th of May: by then its change date (the 22nd) will be earlier than sy-datum - 1 (the 23rd), keeping the table small.
Adding the CDS view (ZV_ZSTXL_TDLINES) to a Replication Flow enables data loading into SAP Datasphere:
You can then additionally:
Join e.g. with the EKKO / EKPO tables.
Filter on the Deletion Indicator (e.g. LOEKZ = ‘X’).
Analyze the potential reasons for deletion using the decoded Header Texts.
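In a Datasphere SQL view, such an analysis could look like the following sketch (table and column names are assumed to match the replicated artifacts in your space; only two of the pivoted line columns are selected):

```sql
-- Sketch: deleted POs with their decoded F01 header text lines.
SELECT ekko."EBELN",
       txt."TDLINE01",
       txt."TDLINE02"
  FROM "EKKO" AS ekko
 INNER JOIN "ZV_ZSTXL_TDLINES" AS txt
    ON txt."TDNAME" = ekko."EBELN"
 WHERE ekko."LOEKZ" = 'X'     -- Deletion Indicator set
   AND txt."TDID"   = 'F01';  -- Header Text
```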
As a double-check, a simple CDS view directly on the STXL table can also be created and enabled for extraction. The Data Preview in S/4HANA shows the data as ‘desired’ from a raw tabular point of view (although, without the READ_TEXT Function Module, it would not be usable anyway):
But because of the LRAW(7902) type of the CLUSTD field, it is not made available in the extraction structure within Datasphere itself:
Objective: Automate delta-enabled extraction of STXL Text Lines, transforming them for reporting use in SAP Datasphere.
Outcome: A structured approach using a custom ABAP Program, table and CDS view to enable the objective.
Would you like to know more about using the STXL ‘Text File Lines’ information of your S/4HANA system within the SAP Datasphere and Business Data Cloud (BDC) context? Please contact Nico van der Hoeven (+31651528656).
As an innovation partner, we want to keep inspiring you. That is why we gladly share our most relevant content, events, webinars and other valuable updates with you.