<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Performance Measurement Framework with Indicator Life-Cycle Support</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aivars NIEDRITIS</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Janis ZUTERS</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laila NIEDRITE</string-name>
          <email>Laila.Niedrite@lu.lv</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Latvia</institution>
          <country country="LV">Latvia</country>
        </aff>
      </contrib-group>
      <fpage>115</fpage>
      <lpage>127</lpage>
      <abstract>
        <p>The performance measurement method introduced in this paper is based on a five-step indicator life-cycle that covers the formal definition of indicators, measurement, analysis, reaction, and reformulation of indicator definitions. A performance measurement framework is introduced that supports this measurement method and enables the indicator life-cycle. The goal of this research is to provide a method for performance measurement that ensures a timely and context-appropriate decision-making process. To store the information necessary for decision support, a data warehouse is used as a component of the process measurement framework.</p>
      </abstract>
      <kwd-group>
        <kwd>Performance measurement</kwd>
        <kwd>key performance indicators</kwd>
        <kwd>indicator lifecycle</kwd>
        <kwd>data warehouse</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Effective organization of business processes ensures the achievement of an institution’s
goals. Performance measurement compares measurement results with target values
to track progress. An important aspect is how to choose appropriate measures
and how to define an appropriate measurement framework.</p>
      <p>
        Performance measures [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] are indicators used by management to measure, report,
and improve performance in an organization. Which particular performance
measures are used is influenced by the management models of organizations and the
measurement perspectives of these models. For example, the BSC [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] defines four
measurement perspectives: Financial, Customer, Internal Process, and Learning and
Growth; other approaches add more perspectives, for example,
Environment/Community and Employee Satisfaction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        To perform effective measurement and react adequately to discovered situations,
not only different perspectives but also different aspects of performance indicators
(e.g. connection to success factors, reporting, reaction, responsibilities) should be
modeled and documented. In our previous research [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] we investigated the features of
indicators and grouped them according to the indicator life-cycle. This concept helps to
support appropriate usage of indicators according to the values of these features.
      </p>
      <p>To implement performance measurement according to the company’s strategy
and some management model, companies develop and use measurement
systems. A data warehouse is one option for building a performance measurement system.
An advantage of using a data warehouse for the implementation of a performance
measurement system is the possibility to reuse the existing infrastructure of the company’s
data warehouse.</p>
      <p>
        Traditionally, data warehouses store the customer and financial indicators of
companies, while other perspectives are typically not covered. Some attempts to integrate
the perspective of internal business processes into a data warehouse have been made in
[
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ].
      </p>
      <p>We do not try to incorporate another measurement perspective into the data
warehouse. Instead, we propose to use the data warehouse as an integral part of a performance
measurement system, which can store indicators of different perspectives and can be
used according to the proposed measurement framework.</p>
      <p>The measurement framework describes different measurement aspects to bring
order to this important undertaking of the organization. Thereby, the quality of
measurement is improved, for example, by analyzing the right
indicators at the right time and undertaking the right actions as a result.</p>
      <p>The usage of an existing data warehouse gives an additional advantage to
performance measurement. The analysis of indicator values can be performed using
existing OLAP tools, reports, and dashboards.</p>
      <p>We start with related work in section 1. Section 2 explains the concept of the
indicator life-cycle that forms the basis for the proposed measurement framework. In
section 3 the reporting tool and its metadata are described; the tool is one of the ready-made
data warehousing components used within the measurement framework. The
architecture of the performance measurement system is given in section 4. In section 5
conclusions are given.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Related Work</title>
      <p>Performance measurement systems implemented by means of a data warehouse are
described in several works. The existing approaches concentrate mostly on how to build an
appropriate dimensional model of the data warehouse according to the process perspective
of the measures to be stored.</p>
      <p>
        The Process Data Warehouse [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] stores histories of engineering processes and
products for experience reuse. The Performance Management System (PMS) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] stores
financial and non-financial data centrally. The PMS contains values of measurements
as well as supplementary information about company structure, business processes,
goals, and performance measures. Besides the traditional data warehousing perspectives, the
process perspective is also analyzed. In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the authors propose a Corporate
Performance Measurement System (CPMS), where process performance data is
integrated with the institution’s data warehouse. Log files of a workflow system are used
as data sources. The model of the CPMS is developed as a part of an existing data
warehouse model of the company.
      </p>
      <p>
        A category of data warehouses for performance measurement can be distinguished
in which business process execution data is stored. The systems already mentioned
use workflow data as one of their data sources, but the workflow data warehouse [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] represents
the concept of a Data Warehouse of Processes. The authors of the Workflow Data
Warehouse [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] argue why and when a data warehouse can become an appropriate
solution for storing and analyzing log files of process execution.
      </p>
      <p>
        Methodologies for how performance should be evaluated are also a subject of
research. For example, the methodology of [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], based on dynamic process performance
evaluation, proposes measurement models for the analysis of different process flows in
order to control the quality of process execution. Activity flow, information flow,
resource flow, and others are measured using time, quality, service, cost, speed,
efficiency, and importance as evaluation criteria.
      </p>
      <p>Our approach uses the advantages of an existing data warehouse – ETL processes,
analysis tools, data storage schemas – to prepare and store indicators
according to the different perspectives. It also integrates the data warehouse with a
performance measurement framework based on the life-cycle of indicators,
which ensures the quality of the performance measurement by supplying the necessary
information for each measurement task.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Indicators and Their Life-Cycle</title>
      <p>
        In our previous research [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] we defined a life-cycle of indicators, which consists of five
steps – indicator definition, measurement, analysis, reaction, and improvement. In each
step an indicator is characterized by a different set of properties.
      </p>
      <p>The indicator definition step describes the information needs of the user. In the
measurement step the indicators get their values. The analysis step represents the process
in which indicators are used to make decisions. The reaction step represents the
implementation of decisions. The life-cycle ends with the evaluation of indicator
definitions and the predefined values of indicator properties during the improvement step.</p>
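      <p>As a minimal illustration of the cyclic order of these steps (our own sketch, not part of the framework’s implementation):</p>

```python
# The five life-cycle steps in order; improvement feeds back into definition.
STEPS = ["definition", "measurement", "analysis", "reaction", "improvement"]

def next_step(step: str) -> str:
    """Return the life-cycle step that follows `step`, closing the cycle."""
    i = STEPS.index(step)
    return STEPS[(i + 1) % len(STEPS)]
```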
      <sec id="sec-3-1">
        <title>2.1. Groups of Indicator Aspects</title>
        <p>
          The properties of indicators are grouped into aspects according to the particular step
(Figure 1). An explanation of the meaning of the properties can be found in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>[Figure 1. Groups of indicator aspects along the life-cycle steps. Definition aspects: type (KRI, RI, KPI, PI), time orientation (past, current, future), data source (existing, to be gathered), level (organization, individual), process (for PI and KPI), perspective (customer, finance, internal processes, learning and growth, environment/community, employee satisfaction), success factors (success factor, critical success factor), and model (indicator dependencies, indicator hierarchy, base/derived measures). Measurement aspects: time (frequency) and method (automatic/calculated, manual/surveys). Analysis aspects: reported to (management, responsible for processes), reporting time, analyst, target value (decision criteria), and analysis model (analytical functions, representation). Reaction aspects: process (rebuilt, improved), reaction time, actions to be performed, and responsible for reaction. Improvement aspects: values of aspects to be changed and definition to be adjusted, accepted, or rejected.]</p>
        <p>
          One of the questions raised by the proposed measurement framework, with
indicators as the central element of interest, was how the indicators should be
formalized to bring maximum clarity into the measurement process – what, why,
and how is measured. Our previous research [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] is focused on the formal definition of
indicators; a method for formalizing the sentences that express the indicators was
proposed there.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Formal Model for Indicator Definition</title>
        <p>On the one hand, indicators are the focus of data analysis in the measurement process. On
the other hand, data warehousing models are built to represent the information needs
for data analysis. Therefore, we can regard indicators as information
requirements for a data warehouse system.</p>
        <p>
          The type of information system to be developed has some impact on the way the
sentences that express requirements are formulated. We assumed that requirements for
data warehouses, and information requirements in particular, have a similar structure or
pattern. We based the proposed model on a structural evaluation of the sentences that
formulate the performance indicators taken from the performance measures database [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>
          All indicators have a common structure; for that reason it is possible to determine a
pattern for rewriting business requirements formally. The requirement formalization
may be represented as a metamodel. A detailed description of the metamodel and of the
algorithm by which the sentences expressing the indicators are reformulated according to
the given metamodel can be found in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
      </sec>
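      <p>For illustration only – the actual metamodel is defined in [3] and is not reproduced here – a sentence expressing an indicator, such as “percentage of orders delivered on time per month”, could be decomposed into parts roughly as follows; all field names are hypothetical:</p>

```python
from dataclasses import dataclass

@dataclass
class IndicatorSentence:
    # Field names are hypothetical; they do not reproduce the metamodel of [3].
    measured_property: str   # what is measured
    aggregation: str         # how values are combined (e.g. percentage, sum)
    business_object: str     # the object being measured
    time_context: str        # the period the value refers to

# Example sentence: "percentage of orders delivered on time per month"
kpi = IndicatorSentence(
    measured_property="delivered on time",
    aggregation="percentage",
    business_object="orders",
    time_context="per month",
)
```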
    </sec>
    <sec id="sec-4">
      <title>3. Reporting Tool and its Metamodel</title>
      <p>
        One important and integral part of our process measurement framework is a reporting
tool developed at the University of Latvia. This reporting tool was developed as part
of the data warehouse framework [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The reporting tool is metadata-based, and in
its latest version it has five metadata layers (Figure 2) that describe different aspects of
defining and storing data warehouse schemata, as well as of defining and operating
reports on these schemata.
      </p>
      <p>Semantic, Logical, and Physical metadata describe the data warehouse schemata at
different levels of abstraction, starting from the business understanding of the schema
elements, describing them by means of OLAP concepts at the logical level, and ending with the
physical storage of the data warehouse tables. OLAP Preferences metadata is
introduced to describe user preferences on the structure and data of reports and is used
for OLAP personalization purposes. Reporting metadata contains the definitions of reports
on data warehouse schemata.</p>
      <p>[Figure 2. The metadata layers of the reporting tool: Semantic Metadata, Logical Metadata, Physical Metadata, OLAP Preferences Metadata, and Reporting Metadata.]</p>
      <p>
        The metadata levels are interconnected by associations between particular classes of
metadata. In the context of this research, the Logical and Reporting metadata are of
particular interest, so both levels, as well as the connections between them, are
described in more detail here. A detailed description of the remaining metadata levels can
be found in our previous research [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
      </p>
      <sec id="sec-4-1">
        <title>3.1. Logical Level Metadata</title>
        <p>
          The logical level metadata describes the data warehouse schema from the
multidimensional viewpoint (Figure 3) and is mostly based on the OLAP package of the
Common Warehouse Metamodel (CWM) [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. It therefore contains the core concepts
of OLAP – dimensions and fact tables (cubes in CWM).
        </p>
        <p>Fact tables contain measures, while dimensions consist of attributes and hierarchies
built from hierarchy levels. Fact tables and dimensions are linked by the FactTableDimension
association. Only dimensions and fact tables that have a FactTableDimension association
can be used simultaneously in one report. More about the connections with the reporting
metadata is given in section 3.3.</p>
        <p>The standard OLAP package of CWM is extended by the class
AcceptableAggregation, which records the meaningful aggregate functions
(e.g. SUM, AVG) for each measure and dimension. The reporting tool uses this metadata to
ensure correct queries.</p>
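        <p>A sketch of how such metadata could be consulted when a query is generated (our own illustration; the sample measure and dimension names are invented, and the tool’s actual interface is not shown in the paper):</p>

```python
# AcceptableAggregation metadata as (measure, dimension) -> allowed aggregate
# functions; the sample entries below are invented for illustration.
ACCEPTABLE = {
    ("Amount", "Time"): {"SUM", "AVG"},
    ("AccountBalance", "Time"): {"AVG"},  # balances should not be summed over time
}

def check_aggregation(measure: str, dimension: str, func: str) -> bool:
    """True only if `func` is registered as meaningful for the given pair."""
    return func in ACCEPTABLE.get((measure, dimension), set())
```

A query generator would call this check before emitting an aggregate over a measure, refusing, for example, SUM of an account balance along the time dimension.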
        <p>[Figure 3. Logical level metadata: a Schema consists of FactTable (Name, Description) and Dimension (Name, Description, IsTimeDimension) classes linked by the FactTableDimension association; a FactTable contains Measure (Name, Description) classes; a Dimension contains Attribute (Name, Description) classes and ordered Hierarchy classes built from Level classes; AcceptableAggregation (Aggregation) connects measures and dimensions.]</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Reporting Level Metadata</title>
        <p>Reporting metadata describes the structure of reports (Figure 4). In the sense of this
model, reports are worksheets. Worksheets contain data items defined by calculations.
Calculations, in their turn, specify formulas containing parameters and table columns
that correspond to schema elements of the underlying data warehouse. Reports are also
based on joins between tables and may have user-defined conditions.</p>
        <p>Reports in the tool are created by choosing the desired elements of a data warehouse
schema and defining conditions, parameters, etc. Only measures and attributes
belonging to one schema can be included in the definition of one report.</p>
        <p>[Figure 4. Reporting metadata: report structure classes, including ConditionSet (Formula, IsVersionCondition), ConditionObject, Constant (Value), and Subquery (QueryText).]</p>
      </sec>
      <sec id="sec-4-3">
        <title>3.3. Connections between Logical and Reporting Metadata</title>
        <p>The models of logical and reporting metadata are interrelated. Report items are defined
by formulas from calculation parts. If a calculation part corresponds to a particular
dimension attribute or measure, then this schema element from the Logical metadata is
connected to the class CalculationPart by the association ‘corresponds’ in the
Reporting metadata.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4. Construction of Performance Measurement Framework</title>
        <p>We propose an approach to building performance measurement systems by
substantially exploiting existing data warehouse technologies.</p>
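        <p>The ‘corresponds’ association between reporting and logical metadata can be pictured as a lookup from calculation parts to schema elements; the following sketch uses simplified dictionaries of our own in place of the metamodel classes:</p>

```python
# Simplified stand-ins for the metamodel classes; all names are ours,
# chosen only to illustrate the 'corresponds' association.
# Logical metadata: schema elements and their kinds.
logical_elements = {"Sales.Amount": "measure", "Customer.Country": "attribute"}

# Reporting metadata: CalculationPart -> schema element it 'corresponds' to.
corresponds = {"part1": "Sales.Amount", "part2": "Customer.Country"}

def element_kind(calculation_part: str) -> str:
    """Resolve a calculation part to the kind of logical schema element."""
    return logical_elements[corresponds[calculation_part]]
```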
        <p>
          The proposed performance measurement framework is grounded on the following
principles of design and operation:
• processing of indicator information is performed in conformance with the
life-cycle of indicators and the formal indicator metamodel defined in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ];
• measurement data are obtained through an ETL process and stored in a data
warehouse;
• the indicator analysis aspect is provided by extensive use of a ready-made data warehouse
reporting tool, both for obtaining actual values from measurement
data and for providing users with detailed reports.
        </p>
        <sec id="sec-4-4-1">
          <title>4.1. Architecture of Performance Measurement System</title>
          <p>The kernel of the performance measurement framework (Figure 5) consists of
the performance management component and the indicator life-cycle support database, as
well as the dashboard module.</p>
          <p>The Indicator life-cycle support database (detailed information is given in the next
section) stores links to the formal definitions of indicators in the Indicator formal
definition database, which is built according to the formal model for indicator
definition described earlier in section 2.2. These indicator definitions are collected and
formalized during the requirements gathering process to obtain precise and
appropriate indicators for process measurement, as well as to document the
information requirements of a data warehouse.</p>
          <p>[Figure 5. Architecture of the performance measurement system: Data sources, ETL, Data warehouse (Measurement data), Data warehouse and reporting metadata, Reporting tool, Dashboard module, Indicator editor, Indicator formal definition database, Indicator life-cycle support database, and the Performance management component.]</p>
          <p>The Indicator editor is an administrative tool meant for two purposes: (1) to
establish the links between the Indicator layer of the system and the Indicator formal
definition database, and (2) to configure the Indicator measurement (or ETL) metadata.</p>
          <p>The measurement process, during which indicators get their values, is performed
through ETL processes, which use the corresponding ETL metadata for measurements.
The ETL component is an external part of the performance measurement framework. In
the context of this research we assume that a set of procedures is defined for
performing the data warehouse data renewal according to the values of the ETL metadata
for measurements (e.g. according to the planned timing schedule). During the ETL
processes, data from external Data sources are processed and loaded into the data
warehouse, which in our framework represents the Measurement data.</p>
          <p>The remaining part of the data warehouse layer of the proposed framework is the
Data warehouse and reporting metadata component, which is developed according to the
metadata layers previously described in sections 3.1 and 3.2, covering respectively
the logical level of the data warehouse schema and the reporting metadata.</p>
          <p>The Performance management component is the main part of the framework and is
provided to coordinate the monitoring of business processes by analyzing the
measurement results of indicators. The component is based on the descriptions of different
properties of indicators that are stored in the Indicator life-cycle support database and
that allow the user to analyze the indicator values in the most appropriate way by
means of two other components of the framework – the Dashboard module and the
Reporting tool.</p>
          <p>
            The Dashboard module visualizes the most important values of indicators,
comparing them with the stored target values of the indicators. The Reporting tool provides
more detailed information to the user by calling predefined reports linked to a particular
indicator definition. An existing reporting tool is used, which is built according to the
previously mentioned reporting metadata (more information about this tool can be found
in [
            <xref ref-type="bibr" rid="ref12 ref14">12, 14</xref>
            ]).
          </p>
        </sec>
        <sec id="sec-4-4-2">
          <title>4.2. Indicator Life-Cycle Support Database</title>
          <p>The Indicator life-cycle support database spins around the ‘Indicator life-cycle support
metadata’ (Figure 6), which define the behavior of the framework. These metadata are
used by the Performance management component, which is designed to coordinate the
workflow of the indicator life-cycle.</p>
          <p>[Figure 6. Indicator life-cycle support database. Indicator life-cycle support metadata: Indicator (IndicatorName, FormalMetamodel, Definition, Measurement, Analysis, Reaction), IndicatorDefinition (Indicator, Perspective={cust,fin,...}, SuccessImpact, Level={org,indiv}, TimeOrientation, IndicatorType={KRI,RI,KPI,PI}), IndicatorMeasurement (Indicator, TimingSchema, link to ETL data warehouse metadata), IndicatorAnalysis (Indicator, TargetValue, ActionValueDefinition, DecisionOperator={=,&lt;,&gt;,...}, TimingSchema, MessageTemplate, ReportDefinition), and IndicatorReaction (Indicator, Type={none,react}, TimingSchema, ResponsibleUser, ActionToPerform). Indicator life-cycle support execution data: Notification (User, Indicator, NotificationTime, Status={unprocessed,delayed,processed}, Message, ReportConfig, RequiredReactionTime, ReactionTime).]</p>
          <p>As the duty of performing measurements is fully assigned to the data warehouse,
the ETL metadata are prepared and stored separately from the Indicator life-cycle support
database. This is, in fact, one of the key points of the framework: to fully engage the data
warehouse for such functionality.</p>
          <p>The workflow status is stored in the ‘Indicator life-cycle support execution data’ and is
accessible directly by users via the Dashboard module. The workflow status is controlled both
by the Performance management component and by the Dashboard module. It incorporates
information about the notifications sent by the system to users and the reactions of users to
them.</p>
        </sec>
        <sec id="sec-4-4-3">
          <title>4.3. Indicator Life-Cycle Steps in Performance Measurement Framework</title>
          <p>According to the indicator life-cycle definition in section 2, the Performance measurement
framework should support all five steps of the life-cycle. This section describes the
proposed framework according to the life-cycle steps.</p>
          <p>[Figures 7–8. Workflow of the definition, measurement, and analysis steps: data from the Data sources are loaded by the ETL processes (driven by the ETL metadata) into the Data warehouse as Measurement data; the Performance management component obtains the actual value, analyzes it according to the Indicator life-cycle support metadata, and reacts by creating a Notification.]</p>
          <p>3. The user’s reaction is obtained from the Dashboard module (Figure 9) and can be
of two types:
• A request for a detailed notification. The Reporting tool is used here to
obtain a report that describes the actual measurement in detail;
• Reaction. If the description of an indicator provides for a response to the
notification, the user is required to assert this in time and in a special way.
4. In the control step the Performance management component checks whether users
have responded to the notifications, if such reactions were appointed in the
analysis step (Figure 10).</p>
          <p>The above described processing of indicator data by the Performance measurement
framework is performed in conformance with the life-cycle of indicators.</p>
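          <p>The analysis step compares an actual value with the stored target value using the DecisionOperator from the IndicatorAnalysis metadata; the following sketch shows this check (the mapping of operator symbols to comparison functions is our illustrative assumption):</p>

```python
import operator

# DecisionOperator symbols mapped to comparison functions; the mapping is
# our illustrative assumption about how '=', '<', '>' would be interpreted.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt}

def needs_notification(actual: float, target: float, decision_op: str) -> bool:
    """True when the actual value meets the decision criterion against the
    stored target value, i.e. a notification should be created."""
    return OPS[decision_op](actual, target)
```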
          <p>Table 1 shows the mapping between the indicator life-cycle aspect groups and their
implementation by the Performance measurement framework.</p>
          <table-wrap id="tab1">
            <label>Table 1</label>
            <caption>
              <p>Indicator life-cycle steps and their implementation by the performance measurement framework.</p>
            </caption>
            <table>
              <thead>
                <tr>
                  <th>Indicator life-cycle aspect group</th>
                  <th>Description of implementation by the performance measurement framework</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td>1. Definition</td>
                  <td>The indicator definition is described in the Indicator formal definition database, as well as in the Indicator life-cycle support database. Indicator definition includes preparation of the metadata required to ensure the whole process.</td>
                </tr>
                <tr>
                  <td>2. Measurement</td>
                  <td>The measurement process is fully delegated to the data warehouse and its appropriate ETL process.</td>
                </tr>
                <tr>
                  <td>3. Analysis</td>
                  <td>Analysis is coordinated by the Performance management component. Measurement data are processed and displayed to the users.</td>
                </tr>
                <tr>
                  <td>4. Reaction</td>
                  <td>A user reads and, if required, reacts to the notification. The Performance management component controls the reaction.</td>
                </tr>
                <tr>
                  <td>5. Improvement</td>
                  <td>Indicator improvement technically matches indicator definition.</td>
                </tr>
              </tbody>
            </table>
          </table-wrap>
          <p>The reaction step can be outlined by the following pseudocode:</p>
          <preformat>Procedure react
Begin
  Foreach user Do
    Display all from Notification in the dashboard Where User = user
    Foreach notification From Notification Where User = user Do
      Wait for user action Do
        Case user asks to show detailed information Do
          run report according to notification.ReportConfig and display it
        Case user performs an action according to
             report.Indicator.Reaction.ActionToPerform Do
          notification.ReactionTime = current time
End</preformat>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4.4. Integration with Data Warehouse Components</title>
      <p>The ETL metadata for measurements (the IndicatorMeasurement class) is a part of the
Indicator life-cycle support database (Figure 6). The Indicator attribute identifies the particular
indicator that is measured, whereas TimingSchema describes the time parameters of the
measurement (e.g. frequency, exact starting time). The last attribute – ETLprocess –
points to the data warehouse metadata repository, particularly to the ETL metadata part
of the repository that describes the mappings between the source and data warehouse
schemas. This metadata also contains calls to the corresponding procedures that implement
these mappings and the necessary data transformations. For the proposed measurement
framework we can assume that the IndicatorMeasurement class contains the procedure
call that renews the data in the data warehouse schema containing the data necessary for
calculation of the given indicator.</p>
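      <p>For illustration, a TimingSchema expressed as a refresh frequency could drive the scheduling of the ETL procedure call roughly as follows (a sketch under our own simplified representation; the real TimingSchema format is not specified here):</p>

```python
from datetime import datetime, timedelta

def next_refresh(last_run: datetime, frequency_hours: int) -> datetime:
    """When the ETL procedure for an indicator should run next, given a
    TimingSchema reduced (for illustration) to a frequency in hours."""
    return last_run + timedelta(hours=frequency_hours)

def is_due(last_run: datetime, frequency_hours: int, now: datetime) -> bool:
    """True when the indicator's measurement data should be refreshed."""
    return now >= next_refresh(last_run, frequency_hours)
```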
      <p>The IndicatorAnalysis class of the Indicator life-cycle support database (Figure 6) and
its ReportDefinition attribute is planned to be a pointer to the report definition stored in
accordance with the metamodel of the reporting tool.</p>
      <p>The Reporting metadata (Figure 4) contains the Worksheet class, which identifies a
particular report that can be invoked when the analysis of measurement results is
performed. The report can be simple, when one particular value is retrieved to compare
it with a target value, or complex, when the report is used for detailed analysis. The
complexity depends on the definition of the particular report.</p>
    </sec>
    <sec id="sec-9">
      <title>5. Conclusions</title>
      <p>The use of data warehouses in performance measurement systems has already been
extensively explored. The proposed Performance measurement framework has been
designed to obtain the maximum benefit from mature data warehouse technologies
in implementing indicator life-cycle support.</p>
      <p>The applied model of the indicator life-cycle serves as a theoretical means of quality
assurance for the performance measurement. The use of the data warehouse as an integral
part of the framework covers two important aspects of ensuring the indicator life-cycle:
(a) indicator measurement, and (b) part of indicator analysis (performed by the Reporting
module).</p>
      <p>The provided method for performance measurement ensures a timely and
context-appropriate decision-making process. The indicator life-cycle support database
stores metadata that define and schedule the measurement and control processes of
indicators, including timing schemas, responsibilities, and actions to be performed. The
proposed framework provides the option to base performance control on activities
initiated from the side of the measurement system, as soon as the system recognizes a
problem and thus the need for more detailed analysis.</p>
      <p>Preliminary work on implementing the framework is already in progress, so we
expect the first experimental results in the near future.</p>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
      <p>This work has been supported by European Social Fund (ESF) project
No. 2009/0216/1DP/1.1.1.2.0/09/APIA/VIAA/044.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Parmenter</surname>
          </string-name>
          , Key Performance Indicators: Developing, Implementing, and
          <article-title>Using Winning KPIs</article-title>
          . Second ed. John Wiley &amp; Sons,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Norton</surname>
          </string-name>
          , The Balanced Scorecard. Harvard Business School Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Niedritis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Niedrite</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N.</given-names>
            <surname>Kozmina</surname>
          </string-name>
          ,
          <article-title>Performance measurement framework with formal indicator definitions</article-title>
          . In:
          <string-name>
            <given-names>J.</given-names>
            <surname>Grabis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kirikova</surname>
          </string-name>
          , editors,
          <source>Perspectives in Business Informatics Research, Lecture Notes in Business Information Processing</source>
          <volume>90</volume>
          (
          <year>2011</year>
          ), Springer, Berlin,
          <fpage>44</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>List</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Machaczek</surname>
          </string-name>
          ,
          <article-title>Towards a corporate performance measurement system</article-title>
          .
          <source>In: Proceedings of the ACM Symposium SAC'04</source>
          , ACM Press,
          <year>2004</year>
          ,
          <fpage>1344</fpage>
          -
          <lpage>1350</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>List</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Koller</surname>
          </string-name>
          ,
          <article-title>The challenge of process data warehousing</article-title>
          .
          <source>In: Proceedings of the 26th International Conference VLDB'2000</source>
          ,
          <year>2000</year>
          ,
          <fpage>473</fpage>
          -
          <lpage>483</lpage>
          .
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kueng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wettstein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>List</surname>
          </string-name>
          ,
          <article-title>A holistic process performance analysis through a process data warehouse</article-title>
          .
          <source>In: Proceedings of the 7th American Conference on Information Systems (AMCIS'01)</source>
          ,
          <year>2001</year>
          ,
          <fpage>349</fpage>
          -
          <lpage>356</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bonifati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Casati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Dayal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Shan</surname>
          </string-name>
          ,
          <article-title>Warehousing workflow data: challenges and opportunities</article-title>
          .
          <source>In: Proceedings of the 27th International Conference VLDB'2001</source>
          ,
          <year>2001</year>
          ,
          <fpage>649</fpage>
          -
          <lpage>652</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>A business process intelligence system for enterprise process performance management</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          <volume>38</volume>
          (
          <issue>6</issue>
          ) (
          <year>2008</year>
          ),
          <fpage>745</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Solodovnikova</surname>
          </string-name>
          ,
          <article-title>Data warehouse evolution framework</article-title>
          .
          <source>In: Proceedings of the Spring Young Researcher's Colloquium On Database and Information Systems (SYRCoDIS'07)</source>
          , Moscow, Russia,
          <year>2007</year>
          . Available from: http://ceur-ws.org/Vol-256/submission_4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kozmina</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Solodovņikova</surname>
          </string-name>
          ,
          <article-title>Determining preferences from semantic metadata in OLAP reporting tool</article-title>
          . In:
          <string-name>
            <given-names>L.</given-names>
            <surname>Niedrite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Strazdina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wangler</surname>
          </string-name>
          , editors,
          <source>Perspectives in Business Informatics Research, Local Proceedings of the 10th International Conference BIR 2011, Associated Workshops and Doctoral Consortium</source>
          , Riga Technical University,
          <year>2011</year>
          ,
          <fpage>363</fpage>
          -
          <lpage>370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Solodovnikova</surname>
          </string-name>
          ,
          <article-title>Metadata to support data warehouse evolution</article-title>
          .
          <source>In: Proceedings of the 17th International Conference on Information Systems Development (ISD'08)</source>
          , Paphos, Cyprus,
          <year>2008</year>
          ,
          <fpage>627</fpage>
          -
          <lpage>635</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Solodovnikova</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Niedrite</surname>
          </string-name>
          ,
          <article-title>Evolution-oriented user-centric data warehouse</article-title>
          . In:
          <string-name>
            <given-names>J.</given-names>
            <surname>Pokorny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Repa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Richta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wojtkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Linger</surname>
          </string-name>
          , Ch. Barry, M. Lang, editors,
          <source>Proceedings of the 19th International Conference on Information Systems Development</source>
          , Springer,
          <year>2011</year>
          ,
          <fpage>721</fpage>
          -
          <lpage>734</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          Common Warehouse Metamodel Specification, v1.1. Object Management Group. Available from: http://www.omg.org/cgi-bin/doc?formal/03-03-02.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Solodovnikova</surname>
          </string-name>
          ,
          <article-title>Building Queries on Multiple Versions of Data Warehouse</article-title>
          . In:
          <string-name>
            <given-names>H.-M.</given-names>
            <surname>Haav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalja</surname>
          </string-name>
          , editors,
          <source>Databases and Information Systems V, Selected Papers from the 8th International Baltic Conference DB&amp;IS</source>
          <year>2008</year>
          , IOS Press,
          <year>2008</year>
          ,
          <fpage>75</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>