LO Cockpit

The Logistics Cockpit (LO Cockpit) is a technique for extracting logistics transaction data from R/3.

All the DataSources belonging to logistics can be found in the LO Cockpit (Transaction LBWE) grouped by their respective application areas. 

The DataSources for logistics are delivered by SAP as part of its standard business content in the SAP ECC 6.0 system and follow a naming convention. A logistics transaction DataSource is named 2LIS_<Application>_<Event><Suffix>, where:
  • Every LO DataSource starts with 2LIS. 
  • Application is a two-digit number that identifies the application relating to a set of events in a process, e.g. application 11 refers to SD sales. 
  • Event specifies the transaction that provides the data for the application, and is optional in the naming convention, e.g. event VA refers to creating, changing, or deleting sales orders (from the German Verkaufsauftrag, "sales order"). 
  • Suffix specifies the kind of information that is extracted, e.g. ITM refers to item data, HDR to header data, and SCL to schedule lines.
Upon activation of the business content DataSources, all related components, such as the extract structure and the extractor program, are also activated in the system.

The extract structure can later be customized to meet specific reporting requirements, and user exits can be used for the same purpose.

A generated extract structure follows the naming convention MC<Application><Event>0<Suffix>, where the suffix is optional. For example, 2LIS_11_VAITM (sales order item) has the extract structure MC11VA0ITM.
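As an illustration, the naming rules above can be sketched in a few lines of Python (the event/suffix split here is a simplification for this example; real suffix sets vary by application):

```python
import re

def parse_lo_datasource(name):
    """Split an LO DataSource name of the form 2LIS_<App>_<Event><Suffix>.
    Simplified sketch: only the suffixes named in these notes are handled."""
    m = re.match(r"2LIS_(\d{2})_([A-Z]*?)(ITM|HDR|SCL)?$", name)
    if not m:
        raise ValueError("not an LO transaction DataSource: " + name)
    return m.group(1), m.group(2), m.group(3) or ""

def extract_structure_name(datasource):
    """Derive the extract structure name MC<App><Event>0<Suffix>."""
    app, event, suffix = parse_lo_datasource(datasource)
    return "MC{}{}0{}".format(app, event, suffix)

print(extract_structure_name("2LIS_11_VAITM"))  # MC11VA0ITM
```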

Delta Initialization:
  • LO DataSources use the concept of setup tables to carry out the initial data extraction process. 
  • The restructuring/setup tables prevent the BI extractors from directly accessing the frequently updated, large logistics application tables; they are used only for the initialization of data to BI.
  • Before loading data into the BI system for the first time, the setup tables must be filled.
Delta Extraction:
  • Once the initialization of the logistics transaction data DataSource is successfully carried out, all subsequent new and changed records are extracted to the BI system using the delta mechanism supported by the DataSource. 
  • The LO DataSources support the ABR delta mechanism, which is compatible with both DSOs and InfoCubes. ABR delta creates deltas with after, before, and reverse images that are written directly to the delta queue, which is generated automatically after a successful delta initialization.
  • The after image provides the status after the change; the before image provides the status before the change, with a minus sign; and the reverse image sends the record with a minus sign for deleted records.
  • The delta provided by the LO DataSources is a push delta, i.e. the delta records from the respective application are pushed to the delta queue before they are extracted to BI as part of the delta update. Whether a delta is generated for a document change is determined by the LO application. This is a very important aspect of the logistics DataSources, because the very program that updates the application tables for a transaction also triggers/pushes the data for the information systems, by means of an update type, which can be a V1 or a V2 update. 
  • The delta queue for an LO DataSource is automatically generated after successful initialization and can be viewed in transaction RSA7, or in transaction SMQ1 under name MCEX<Application>.
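The before/after/reverse image logic can be sketched as follows; the record layout and the 'recordmode' flag values (' ' = after, 'X' = before, 'R' = reverse) are simplified assumptions modeled on the 0RECORDMODE convention:

```python
def abr_images_for_change(before, after):
    """A change yields a before image (key figures negated, marked 'X')
    followed by an after image with the new values (marked ' ')."""
    before_image = dict(before, quantity=-before["quantity"], recordmode="X")
    after_image = dict(after, recordmode=" ")
    return [before_image, after_image]

def abr_image_for_delete(record):
    """A deletion yields a single reverse image with negated key figures."""
    return [dict(record, quantity=-record["quantity"], recordmode="R")]

# Hypothetical sales order item changed from quantity 5 to 8:
old = {"doc": "4711", "item": "10", "quantity": 5}
new = {"doc": "4711", "item": "10", "quantity": 8}
images = abr_images_for_change(old, new)
# Net effect on an additively updated key figure: -5 + 8 = +3
print(sum(img["quantity"] for img in images))  # 3
```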
Update Methods
The following three update methods are available:
  1. Synchronous update (V1 update) 
  2. Asynchronous update (V2 update) 
  3. Collective update (V3 update)
Synchronous update (V1 update)
  • The statistics update is carried out at the same time as the document update in the application tables; that is, whenever we create a transaction in R/3, the entries are written to the R/3 tables, and this takes place in the V1 update.
Asynchronous update (V2 update)
  • The document update and the statistics update take place in different tasks. The V2 update starts a few seconds after the V1 update; in this update the values are written to the statistical tables, from which we extract into BW.
V1 and V2 updates do not require any scheduling activity.

Collective update (V3 update)
  • The V3 update uses delta queue technology and is similar to the V2 update. The main difference is that V2 updates are always triggered by the application, while the V3 update may be scheduled independently.
Update modes
  1. Direct Delta 
  2. Queued Delta 
  3. Unserialized V3 Update

Direct Delta
  • With this update mode, extraction data is transferred directly to the BW delta queues with each document posting. 
  • Each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues.  
  • In this update mode there is no need to schedule a job at regular intervals (through LBWE “Job control”) in order to transfer the data to the BW delta queues. Thus no additional monitoring of update data or of an extraction queue is required. 
  • This update method is recommended only for customers with a low volume of documents (at most 10,000 document changes, i.e. creations, changes, or deletions, between two delta extractions) for the relevant application. 
Queued Delta
  • With the queued delta update mode, the extraction data is written to an extraction queue, from which it is transferred to the BW delta queues by an extraction collective run. 
  • If we use this method, it is necessary to schedule a job (the extraction collective run) to transfer the data to the BW delta queues regularly. 
  • SAP recommends scheduling this job hourly during normal operation after a successful delta initialization, but there is no fixed rule: it depends on the peculiarities of each specific situation (business volume, reporting needs, and so on).
Unserialized V3 Update
  • With the unserialized V3 update, the extraction data is written to an update table, from which it is transferred to the BW delta queues by a V3 collective run.
Setup Table 
  • A setup table is a cluster table that is used to extract data from the R/3 tables of the same application. 
  • Setup tables store the data before it is updated to the target system. Once the setup tables are filled, you need not access the application tables again and again, which improves system performance.
  • The LO extractor reads data from the setup tables during initialization and full uploads. 
  • As setup tables are required only for full and init loads, we can delete their contents after loading in order to avoid duplicate data. 
  • We fill the setup tables in LO by using OLI*BW, or via SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in Setup Tables --> Application-Specific Setup Table of Statistical Data. 
  • We can delete the setup tables using transaction code LBWG. You can also delete setup tables application-wise via SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Delete the Contents of the Setup Tables. 
  • The technical name of a setup table is <ExtractStructure>SETUP. For example, for DataSource 2LIS_11_VAHDR with extract structure MC11VA0HDR, the setup table name is MC11VA0HDRSETUP.
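A tiny sketch of the setup table naming rule, exactly as stated in the notes above:

```python
def setup_table_name(extract_structure):
    """Setup table name = extract structure name + 'SETUP'."""
    return extract_structure + "SETUP"

# e.g. DataSource 2LIS_11_VAHDR with extract structure MC11VA0HDR:
print(setup_table_name("MC11VA0HDR"))  # MC11VA0HDRSETUP
```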
LUW
  • LUW stands for Logical Unit of Work. When we create a new document, a new image ‘N’ is formed; whenever an existing document is changed, a before image ‘X’ and an after image ‘ ’ are formed, and these before and after images together constitute one LUW.
Delta Queue (RSA7)
  • The delta queue stores records that have been generated since the last delta upload and have not yet been sent to BW. 
  • Depending on the update method selected, generated records arrive either directly in this delta queue or via the extraction queue.
  • The delta queue (RSA7) maintains two images: the delta image and the repeat delta. When we run a delta load in the BW system it sends the delta image; if a delta load fails and we request a repeat delta, it sends the repeat delta records.
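The delta/repeat-delta behavior can be sketched with a minimal queue model (a simplified assumption of how RSA7 behaves, not its actual implementation):

```python
class DeltaQueue:
    """Minimal sketch of an RSA7-style delta queue: the last extracted
    delta is kept so that a failed load can request a repeat delta."""

    def __init__(self):
        self.pending = []      # records generated since the last delta upload
        self.last_delta = []   # kept for a possible repeat request

    def push(self, record):
        self.pending.append(record)

    def extract_delta(self):
        # A normal delta load drains the pending records...
        self.last_delta, self.pending = self.pending, []
        return self.last_delta

    def repeat_delta(self):
        # ...and a repeat delta re-sends the previous delta unchanged.
        return self.last_delta

q = DeltaQueue()
q.push({"doc": "4711"})
first = q.extract_delta()
print(q.repeat_delta() == first)  # True
```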
Statistical Setup
  • Statistical setup is a program that is specific to an application component. Whenever we run this program, it extracts all the data from the database tables and puts it into the setup tables.

InfoObjects

InfoObjects
  • InfoObjects are the smallest building blocks in SAP BI. InfoObjects are used to describe business information and processes. Examples of InfoObjects are: Customer Name, Region, Currency, Revenue, Fiscal Year. 
  • There are five types of SAP BW InfoObjects:
    • Characteristics
    • Key figures
    • Unit characteristics
    • Time characteristics and
    • Technical characteristics.

Characteristics

  • Characteristics describe business objects in SAP BI like products, customers, employee, and attributes like color, material, company code.
  • They enable us to define selection criteria with which the required data is displayed.
Key figures
  • Key figures describe numeric information that is reported on in a query.
Unit characteristics
  • Unit characteristics provide meaning for key figure values by storing currencies or units of measure (e.g., currency unit).

Time characteristics

  • Time characteristics describe time reference of business events.
  • They build the time dimension - obligatory part of InfoCube.
  • The complete time characteristics provided by SAP: 
    • calendar day (0CALDAY)
    • calendar week (0CALWEEK)
    • calendar month (0CALMONTH)
    • calendar quarter (0CALQUARTER)
    • calendar year (0CALYEAR)
    • fiscal year (0FISCYEAR) and 
    • fiscal period (0FISCPER). 
    • Incomplete time characteristics: 0CALMONTH2, 0CALQUART1, 0HALFYEAR1, 0WEEKDAY1, 0FISCPER3.
Technical characteristics
  • Technical characteristics have administrative purposes (e.g. Request ID).

Error calling number range object


Solution

1. Note down the InfoCube and Dimension name.

2. Go to T-Code RSRV --> All Elementary Tests --> Transactional Data, then double-click “Comparison of Number Range of a Dimension and Maximum DIMID” --> double-click the same test in the right-side pane --> enter the InfoCube name and Dimension name and click the Transfer button --> click Correct Error at the top.


PSA error record

Reason

It sometimes happens that the incoming data to BW has an incorrect format, or that a few records have incorrect entries. 



Solution

1. Go to the Details tab and find the packet number and record number of the error record.


2. Click the PSA icon in the monitor and select the erroneous data packet.


3. Double-click the error record, edit it to the correct value, and select Save.

Now update from the PSA to the target by selecting the option “Start update immediately”. 
 

Time stamp error

Reason
  • The “Time Stamp” Error occurs when the Transfer Rules or Transfer Structure are internally inactive in the system.
  • They can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are changed on the BW side. In that case the Transfer Rules show an active status when checked, but they are actually not active; this happens because the time stamps of the DataSource and the Transfer Rules differ.



Solution 

1. Go to RSA1 --> Source system --> Replicate DataSource



2. Run the program RS_TRANSTRU_ACTIVATE_ALL



3. Mention Source System and InfoSource and then execute.



Now the Transfer Structure will be activated automatically; then proceed with the reload, which should now succeed.

 

SAP BI Terminology

Info Area
  • An InfoArea is like a “folder” in Windows. InfoAreas are used to organize InfoCubes, InfoObjects, MultiProviders, and InfoSets in SAP BW.

InfoObject Catalog 

  • Similar to an InfoArea, an InfoObject Catalog is used to organize InfoObjects based on their type, so we have InfoObject Catalogs of type Characteristic and Key Figure.

Info Objects

  • It is the basic unit or object in SAP BI, used to create any structure in SAP BI. 
  • Each field in the source system is referred to as an InfoObject in SAP BI.
  • We have 5 types of Info Objects: Characteristic, KeyFigure, Time Characteristic, Unit Characteristic, and Technical Characteristic.
Data Source
  • Data Source defines Transfer Structure.
  • Transfer Structure indicates what fields and in what sequence are they being transferred from the source system. 
  • We have 4 types of DataSources:
    • Attr: used to load master data attributes 
    • Text: used to load text data 
    • Hier: used to load hierarchy data 
    • Transaction data: used to load transaction data into an InfoCube or ODS.
Source System
  • Source system is an application from where SAP BW extracts the data. 
  • We use Source system connection to connect different OLTP applications to SAP BI.
  • We have different adapters / connectors available:
    • SAP Connection Automatic
    • SAP Connection Manually 
    • My Self Connection
    • Flat file Interface
    • DB connect
    • External Systems with BAPI
Info Package
  • Info package is used to schedule the loading process. 
  • Info package is specific to data source. 
  • All the properties we see in the InfoPackage depend on the properties of the DataSource.


Extended Star Schema

Extended Star Schema

  • The BW extended star schema differs from the basic star schema: in the extended star schema the fact table is connected to the dimension tables, the dimension tables are connected to the SID tables, and the SID tables are connected to the master data tables.
  • Fact Table and Dimension table will be inside the cube.
  • SID table and Master data tables are outside the cube.
  • One fact table can be connected to 16 dimension tables, and one dimension table can be assigned a maximum of 248 SID tables (248 characteristics).
  • When we load transaction data into an InfoCube, the system generates DIM IDs based on the SIDs and uses the DIM IDs in the fact table.
  • Each characteristic can have its own master data tables (ATTR, TEXT, HIER). The attribute table stores all the attribute data, the text table stores descriptions in multiple languages, and the hierarchy table stores the parent-child data.
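The SID/DIM ID generation described above can be sketched as follows (table layouts are simplified assumptions; real SID and dimension tables hold more columns):

```python
sid_table = {}   # characteristic value -> SID
dim_table = {}   # tuple of SIDs -> DIM ID

def get_sid(value):
    """Look up or create the surrogate ID (SID) for a characteristic value."""
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1
    return sid_table[value]

def get_dim_id(characteristic_values):
    """Look up or create the DIM ID for a combination of SIDs."""
    sids = tuple(get_sid(v) for v in characteristic_values)
    if sids not in dim_table:
        dim_table[sids] = len(dim_table) + 1
    return dim_table[sids]

# Loading a transaction record: the fact table row stores only
# DIM IDs plus key figures (hypothetical field names).
fact_row = {"dim_customer": get_dim_id(("C100", "DE")), "revenue": 250.0}
print(fact_row)  # {'dim_customer': 1, 'revenue': 250.0}
```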
Fact Table   

  • Fact Table will have Dimension ID’s and Key figures. 
  • Maximum DIM ID’s – 16 
  • Maximum KeyFigure – 233 
  • The Dimension IDs in the Fact Table are connected to the Dimension Table. 
  • Fact Table must have at least one Dimension ID.
Dimension Table  

  • Dimension Table contains Dimension ID and SID columns. 
  • One column is used for Dimension ID.
  • We have maximum of 248 SID Columns.
  • We can assign maximum of 248 characteristics to one dimension.
 

Star Schema

Star Schema

  • InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.   
  • An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a large fact table that contains the key figures for the InfoCube, as well as several dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.  
  • The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity at which the key figures are stored in the InfoCube.  
  • Characteristics that logically belong together are grouped together in a dimension. Dimensions are to a large extent independent of each other, and dimension tables remain small with regard to data volume. This is beneficial in terms of performance, as it decouples the master data from any specific InfoCube; the master data can be used by multiple InfoCubes at the same time. This InfoCube structure is optimized for data analysis.  
  • The fact table and dimension tables are both relational database tables.  
  • Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.  
  • You can create aggregates to access data quickly. Here, the InfoCube data is stored redundantly and in an aggregated form.  
  • You can either use an InfoCube directly as an InfoProvider for analysis and reporting, or use it with other InfoProviders as the basis of a MultiProvider or InfoSet.
Fact Table

  • The fact data are stored in a highly normalized fact table.
  • In a star schema, typically the fact table is very large with small dimensional tables.
  • The fact table has a relatively small number of columns (attributes) and a large number of rows (records), whereas the associated dimension tables have a large number of columns (attributes) and a small number of rows.
Dimension Table

  • Dimension data are stored in dimension tables.
  • A dimension table linked to the fact table holds a group of similar characteristics. For example, a customer dimension table may contain three characteristics: customer name, address, and sales organization. There will be one customer dimension record for each unique combination of these three values; for example, each record in the customer dimension may represent a specific customer.

 

Limitations of Star Schema

 

  • In the case of the star schema, master data is stored inside the cube, so master data cannot be reused in other cubes. 
  • Since the tables inside the cube contain alphanumeric data, query performance degrades, because processing numeric values is much faster than processing alphanumeric values.
  • In case of Star schema, we are limited to only 16 dimensions.

OLAP and OLTP

Online Transaction Processing (OLTP) refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing.

Online Analytical Processing (OLAP) is a series of protocols used mainly for business reporting. Using OLAP, businesses can analyze data in many different ways: planning, simulation, data warehouse reporting, and trend analysis.

OLAP and OLTP are two entirely different kinds of systems, with different purposes and environments: OLAP is analytical, whereas OLTP is transactional.

Difference between OLAP and OLTP

  • Target 
    OLTP is used in the operative environment to gain efficiency through the automation of business processes. OLAP is used in the informative environment, usually by management to support decision making.
  • Priorities 
    As a transactional system, OLTP prioritizes high availability and handles a higher data volume. OLAP, as an analytical system, works with simpler, aggregated data and prioritizes flexible data access.
  • Level of detail 
    OLTP stores data in a very high level of detail, whereas OLAP stores data in aggregation.
  • Age of data 
    OLTP data are current data; that is, data is stored in OLTP with minimal history. OLAP data are historical data.
  • Database operation 
    Frequent data changes are a feature of operative system. So, in OLTP system we can read, add, change, delete or refresh data. In OLAP, we only can read the data since they are frozen after a certain point for analysis purpose.
  • Integration of data from various applications (system) 
    Since the OLTP system is built for operations, it has minimal integration with other applications. In contrast, OLAP needs a high degree of integration of information from many applications or systems, because it is used for analysis.
  • Normalization in database 
    To reduce data redundancy, a high degree of normalization is required in OLTP. OLAP is typically de-normalized, with fewer tables and use of the extended star schema.

Enterprise Resource Planning (ERP)

ERP stands for Enterprise Resource Planning.  ERP is a way to integrate the data and processes of an organization into one single system. Today's ERP systems can cover a wide range of functions and integrate them into one unified database.

For example, functions such as Human Resources, Supply Chain Management, Customer Relationship Management, Financials, Manufacturing, and Warehouse Management were all once stand-alone software applications, usually housed with their own database and network; today they can all fit under one umbrella: the ERP system. 

There are many advantages to implementing an ERP system:

  • A totally integrated system 
  • The ability to streamline different processes and workflows
  • The ability to easily share data across various departments in an organization
  • Improved efficiency and productivity levels 
  • Better tracking and forecasting 
  • Lower costs 
  • Improved customer service 
 

Introduction to SAP R/3

SAP R/3 is SAP's integrated software solution for client/server and distributed open systems.

The letter R stands for real-time, and 2 and 3 represent two-tiered and three-tiered architectures, respectively. SAP R/2 is for mainframes only, whereas SAP R/3 is a three-tiered implementation using client/server technology for a wide range of hardware and software platforms. When implementing a Web front-end to an SAP R/3 implementation, the three-tiered architecture becomes multi-tiered, depending on how the Web server is configured against the database server or how the Web server itself distributes the transaction and presentation logic.

SAP R/3's multi-tiered architecture enables its customers to deploy R/3 with or without an application server. Common three-tiered architecture consists of the following three layers:
  • Data Management 
  • Application Logic
  • Presentation
The Data Management layer manages data storage, the Application layer performs business logic, and the Presentation layer presents information to the end user.

Most often, the Data Management and Application Logic layers are implemented on one machine, whereas workstations are used for presentation functions. This two-tiered application model is suited best for small business applications where transaction volumes are low and business logic is simple.

When the number of users or the volume of transactions increases, separate the application logic from database management functions by configuring one or more application servers against a database server. This three-tiered application model for SAP R/3 keeps operations functioning without performance degradation. Often, additional application servers are configured to process batch jobs or other long and intense resource-consuming tasks.

 

Versions of SAP BW/BI

In 1997, the first version of SAP's product for reporting, analysis, and data warehousing was launched; the product was termed the "Business Information Warehouse" (BIW).

Evolution of SAP BW/BI

Name     Version   Release
BIW      1.2A      Oct-1998
BIW      1.2B      Sep-1999
BIW      2.0A      Feb-2000
BIW      2.0B      Jun-2000
BIW      2.1C      Nov-2000

Name change: BIW to BW

BW       3.0A      Oct-2001
BW       3.0B      May-2002
BW       3.1       Nov-2002
BW       3.1C      Apr-2004
BW       3.3       Apr-2004
BW       3.5       Apr-2004

Name change: BW to BI

BI       7.0       Jul-2005

BIW    : Business Information Warehouse
BW     : Business Warehouse
BI       : Business Intelligence
 

Introduction to SAP BI

SAP Netweaver Business Intelligence (SAP BI) is the name of the Business Intelligence, analytical, reporting and Data Warehousing (DW) solution which is one of the major enterprise software applications produced by SAP AG.

SAP BI consists among other things of components for data management (Data Warehousing Workbench), extensive data modeling capabilities, an embedded analytical engine, a suite of rich front-end analytical tools referred to as Business Explorer (BEx), and operational tools used for importing the most current transactional data into the system.

It may be helpful to consider layers that make up the structure of SAP's BI solution:

  • Extraction, Transformation and Load (ETL) layer: responsible for extracting data from a specific source, applying transformation rules, and loading it into SAP BW system.
  • Data warehouse area: responsible for storing the information in various types of structures (e.g. DataStore Objects, InfoObjects, and multidimensional structures called InfoCubes).
  • Reporting: responsible for accessing the information in the data warehouse area (and directly in source systems using virtual InfoProviders) and presenting it in a user-friendly manner to the analyst or business user.
  • Planning: provides capabilities for the user to run simulations and perform tasks such as budget calculations.
Purpose of Business Intelligence
During all business activities, companies create data. In all departments of the company, employees at all levels use this data as a basis for making decisions. Business Intelligence (BI) collates and prepares the large set of enterprise data. By analyzing the data using BI tools, you can gain insights that support the decision-making process within your company. BI makes it possible to quickly create reports about business processes and their results and to analyze and interpret data about customers, suppliers, and internal activities. Dynamic planning is also possible. Business Intelligence therefore helps optimize business processes and enables you to act quickly and in line with the market, creating decisive competitive advantages for your company.




Introduction to SAP

SAP was founded in 1972 in Walldorf, Germany by five former IBM employees: Dietmar Hopp, Hans-Werner Hector, Hasso Plattner, Klaus E. Tschira, and Claus Wellenreuther. The name stands for Systems, Applications and Products in Data Processing. Over the years, it has grown and evolved to become the world's premier provider of client/server business solutions, for which it is so well known today. The SAP R/3 enterprise application suite for open client/server systems has established new standards for providing business information management solutions.

The SAP product is considered excellent, though, like any software product, it can never be perfect.

The main advantage of using SAP as your company's ERP system is that SAP has a very high level of integration among its individual applications, which guarantees consistency of data throughout the system and the company itself.

A standard SAP project landscape is divided into three environments: Development, Quality Assurance, and Production.

The development system is where most of the implementation work takes place. The quality assurance system is where all the final testing is conducted before moving the transports to the production environment.  The production system is where all the daily business activities occur.  It is also the client that all the end users use to perform their daily job functions.

For any company, the production system should contain only transports that have passed all tests.

SAP is table-driven customization software. It allows businesses to make rapid changes to their business requirements with a common set of programs. User exits are provided for businesses to add additional source code. Tools such as screen variants are provided to let you set field attributes: whether to hide or display a field, or make it mandatory.

SAP modules are categorized into 3 core functional areas:

Logistics 
  • Sales and Distribution (SD)
  • Material Management (MM)  
  • Warehouse Management (WM) 
  • Production Planning (PP) 
  • General Logistics (LO) 
  • Quality Management (QM)
Financial
  • Financial Accounting (FI) 
  • Controlling (CO) 
  •  Enterprise Controlling (EC)  
  • Investment Management (IM)  
  • Treasury (TR) 
Human Resources 
  • Personnel Administration (PA)  
  • Personnel Development (PD)