If the business process model prescribes the activities and their execution
constraints in a complete fashion, then the process is structured. The different
options for decisions that will be made during the enactment of the process
have been defined at design time. For instance, a credit request process might
use a threshold amount, such as 5000 euros, to decide whether a simple or a
complex credit check is required. Each process instance then uses the
requested amount to decide on the branch to take.
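This kind of design-time threshold can be sketched in a few lines of code; the 5000-euro limit, the function name, and the branch labels below are illustrative assumptions, not part of any particular system:

```python
# Illustrative sketch: a decision rule fixed at design time,
# evaluated per process instance at enactment time.
# Threshold and branch names are hypothetical.

CREDIT_CHECK_THRESHOLD = 5000  # euros, fixed in the process model

def select_credit_check(requested_amount: float) -> str:
    """Choose the branch a process instance takes, based on the requested amount."""
    if requested_amount <= CREDIT_CHECK_THRESHOLD:
        return "simple_credit_check"
    return "complex_credit_check"

print(select_credit_check(3000))   # -> simple_credit_check
print(select_credit_check(12000))  # -> complex_credit_check
```

Each process instance supplies its own requested amount, but the decision logic itself was fixed when the process was modelled.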
Leymann and Roller have organized business processes according to the dimensions
of structure and repetition. They coined the term production workflow.
Production workflows are well structured and highly repetitive. Traditional
workflow management system functionality is well suited to supporting production
workflows.
If business process activities are performed by process participants who have
the experience and competence to decide on their own working procedures,
structured processes are more of an obstacle than an asset. In structured
business processes, it is not possible to skip process activities that the
knowledge worker does not require, or to execute concurrently steps that the
process model orders sequentially.
To better support knowledge workers, business process models can define
processes in a less rigid manner, so that activities can be executed in any order
or even multiple times until the knowledge worker decides that the goals of
these activities have been reached. So-called ad hoc activities are an important
concept for supporting unstructured parts of processes.
Case handling is an approach that supports knowledge workers performing
business processes with a low level of structuring and, consequently, a high
level of flexibility. Rather than prescribing control flow constraints between
process activities, fine-grained data dependencies are used to control the enactment
of the business process.
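As a rough sketch of this idea, with hypothetical activity names and data dependencies, an activity in a case-handling system becomes available as soon as the data objects it depends on exist, regardless of any predefined ordering:

```python
# Sketch of case handling: activities are enabled by data availability,
# not by control flow constraints. Activity names and their data
# dependencies are illustrative examples.
dependencies = {
    "assess_claim": {"claim_form"},
    "request_expert_opinion": {"claim_form", "damage_photos"},
    "settle_payment": {"assessment_report"},
}

def enabled_activities(available_data: set) -> set:
    """Return every activity whose required data objects are all available."""
    return {name for name, needed in dependencies.items()
            if needed <= available_data}

print(enabled_activities({"claim_form"}))  # only assess_claim is enabled
```

The knowledge worker decides which of the enabled activities to perform and in which order; the system merely tracks which data is present.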
Degree of Repetition of Business Processes:
Business processes can be classified according to their degree of repetition.
Examples of highly repetitive business processes include business processes
without human involvement, such as online airline ticketing. However, business
processes in which humans are involved can occur frequently, for example,
insurance claim processing. If the degree of repetition is high, then investments
in modelling and supporting the automatic enactment of these processes pay
off, because many process instances can benefit from these investments.
At the other end of the repetition continuum, there are business processes
that occur a few times only. Examples include large engineering efforts, such
as designing a vessel. For these processes it is questionable whether the effort
introduced by process modelling does in fact pay off, because the cost of
process modelling per process instance is very high.
Since improving the collaboration between the persons involved is at the
centre of attention, these processes are called collaborative business processes.
In collaborative business processes, the goal of process modelling and enactment
is not only efficiency, but also tracing exactly what has actually been
done and which causal relationships between project tasks have occurred.
This aspect is also present in the management of scientific experiments,
where data lineage is an important goal of process support. Each experiment
consists of a set of activities, and an increasing fraction of the experimentation
is performed by analyzing data using software systems. The data
is transformed in a series of steps. Since experiments need to be repeatable,
it is essential that the relationships between the data sets be documented properly.
Business processes with a low degree of repetition are often not fully automated
and have a collaborative character, so the effort of providing
automated solutions is not required, which lowers the cost.
Evolution of Enterprise Systems Architectures:
Process orientation in general and business process management in particular
are parts of a larger development that has been affecting the design of
information systems since its beginning: the evolution of enterprise systems
architectures.
Enterprise systems architectures are mainly composed of information systems.
These systems can be distinguished from software systems in the area
of embedded computing that control physical devices such as mobile phones,
cars, or airplanes. Business process management mainly deals with information
systems in the context of enterprise systems architectures.
The guiding principle of this evolution is separation of concerns, a principle
identified by Edsger Dijkstra and characterized as "focusing one's attention
upon some aspect." It is one of the key principles in handling the complexity
of computer systems.
While this principle has many applications in theoretical and applied computer
science, in the context of software systems design, and therefore also in
information systems design, it means identifying sets of related functionality
and packaging them in a subsystem with clearly identified responsibilities and
interfaces. Using this approach, complex and powerful software systems can
be engineered. Separation of concerns also facilitates reuse at a level of coarse
granularity, because well-specified functional units provided by subsystems
can be used by different applications.
Separation of concerns also facilitates response to change and is therefore
an important mechanism to support flexibility of software systems, because
individual subsystems can be modified or even exchanged with another subsystem
providing the same functionality without changing other parts of the
system, provided the interfaces remain stable.
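A minimal sketch of this idea, with hypothetical subsystem names, shows how a stable interface allows one implementation to be exchanged for another without touching the calling code:

```python
from abc import ABC, abstractmethod

# Sketch: callers depend only on a stable interface, so the subsystem
# behind it can be replaced without changing other parts of the system.
# Class and method names here are illustrative.
class AddressStore(ABC):
    @abstractmethod
    def lookup(self, customer_id: str) -> str:
        """Return the stored address for a customer."""

class FileStore(AddressStore):
    def lookup(self, customer_id: str) -> str:
        return f"address-from-file:{customer_id}"

class DatabaseStore(AddressStore):
    def lookup(self, customer_id: str) -> str:
        return f"address-from-db:{customer_id}"

def print_label(store: AddressStore, customer_id: str) -> str:
    # This caller works unchanged whichever implementation is plugged in.
    return store.lookup(customer_id)
```

Swapping `FileStore` for `DatabaseStore` is a local change: `print_label` and every other caller remain untouched, which is precisely the flexibility the text describes.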
Since local changes do not affect the overall system, a second guiding principle
of computer science is realized: information hiding, originally introduced
by David Parnas. Reasons for changes can be manifold: new requirements in
an ever-changing dynamic market environment, changes in technology, and
changes in legal regulations that need to be reflected in software systems.
While effective response to change is an important goal of any software system,
it is of particular relevance to business process management systems, as
will be detailed below.
Before addressing the evolution of enterprise systems architectures, the
understanding of software architectures as used in this book is described. In
general, software architectures play a central role in handling the complexity
of software systems.
Goals, Structure, and Organization in business management:
Arguably, the most important goal of business process management is a
better understanding of the operations a company performs and their relationships.
The explicit representation of business processes is the core concept
for achieving this better understanding.
Identifying the activities and their relationships and representing them
by business process models allows stakeholders to communicate about these
processes in an efficient and effective manner. Using business process models
as common communication artefacts, business processes can be analyzed, and
potentials for improving them can be developed.
Flexibility, the ability to change, is the key operational goal of business
process management. The subjects of change are diverse. Business process
management not only supports changing the organizational environment of
the business process, but also facilitates changes in the software layer without
changing the overall business process. Flexibility in business process management
is discussed in detail in Section 3.10.
A repository of the business processes that a company performs is an
important asset. To some extent, it captures knowledge of how the company
performs its business. Therefore, business process models can be regarded as
a means of expressing knowledge of the operation of a company.
But business process management also facilitates continuous process improvement.
The idea is to evolutionarily improve the organization of work
a company performs. Explicit representations of business processes are well
suited for identifying potentials for improvement, but they can also be used
to compare actual cases with the specified process models. While, in principle,
more radical business process reengineering activities can also be supported
by business process management, evolutionary measures to improve business
processes might in many cases be the favourable solution.
Business process management also aims at narrowing the gap between
business processes that a company performs and the realization of these processes
in software. The vision is that there is a precisely specified relationship
between an activity in the business process layer and its realization in software.
A metamodel is used to specify the semantics
of control flow patterns. An important part of this book deals with process
modelling techniques and notations. The most important ones are discussed in
a concise manner, including Petri nets, event-driven process chains, workflow
nets, Yet Another Workflow Language, a graph-based workflow language, and
the Business Process Modeling Notation.
Enterprise Applications and their Integration:
Based on operating systems and communication systems as a basic abstraction
layer, relational database management systems for storing and retrieving
large amounts of data, and graphical user interface systems, more and more
elaborate information systems could be engineered.
Most of these information systems host enterprise applications. These applications
support enterprises in managing their core assets, including customers,
personnel, products, and resources. Therefore, it is instructive to look
in more detail at enterprise information systems, starting from individual
enterprise applications and addressing the integration of multiple enterprise
applications. The integration of multiple enterprise applications has spawned
a new breed of middleware, enterprise application integration systems. Enterprise
application integration proves to be an important application area of
business process management.
These developments can be illustrated with an enterprise scenario. In the
early stages of enterprise computing, mainframe solutions were developed that
hosted monolithic applications, typically developed in assembler programming
language. These monolithic applications managed all tasks with a single huge
program, including the textual user interface, the application logic, and the
data. Data was mostly stored in files, and the applications accessed data files
through the operating system.
With the advent of database systems, an internal structuring of the system
was achieved: data was managed by a database management system. However,
the application code and the user interface code were not separated from each
other. The user interface provided the desired functionality through textual,
forms-based interfaces.
With lowering cost of computer hardware and growing requirements for
application functionality, more application systems were developed. It was
typical that an enterprise had one software system for human resources management,
one for purchase order management and one for production planning.
Each of these application systems hosted its local data, typically in a
database system, but sometimes even on the file system. In large enterprises,
in different departments, different application systems were sometimes used
to cope with the same issue.
What made things complicated was the fact that these application systems
hosted related data. This means that one logical data object, such as a
customer address, was stored in different data stores managed by different application
systems. Dependencies between data stored in multiple systems were
also represented by dedicated links, for instance through a contract identifier
or an employee identifier.
It is obvious that in these settings changes were hard to implement, because
there were multiple data dependencies between these disparate systems,
and changes in one system had to be mirrored by changes in other systems.
Detecting the systems affected and the particular change required in these
systems was complex and error-prone. As a result, any change of the data
objects, for instance, of a customer address, needed to be reflected in multiple
applications. This lack of integration led to inconsistent data and, in many
cases, to dissatisfied customers.
Traditional Application Development:
The main goal of this section is to categorize business process management
systems, from a software systems point of view, within the major developments
that information systems design has undergone in the last decades, starting
from the first stages in the evolution of information systems. The dates
mentioned provide only rough estimates; the respective systems architectures
were not uncommon at the dates given.
In the early days of computing, applications were developed from scratch,
without taking advantage of prior achievements other than subroutines of
fine granularity. Application programmers needed to code basic functionality
such as access to persistent storage and memory management.
Basic functionality needed to be redeveloped in different applications, so that
application programming was a costly and inefficient endeavour. As a result
of the tight coupling of the programmed assembler code with the hardware,
porting an application to a new computer system resulted in a more or less
complete redevelopment.
Operating systems were developed as the first type of subsystem with
dedicated responsibilities, realizing separation of operating systems concerns
from the application. Operating systems provide programming interfaces to
functionality provided by the computer hardware. Applications can implement
functionality by using interfaces provided by the operating system, realizing
increased efficiency in system development.
Specific properties of the computer hardware could be hidden from the
application by the operating system, so that changes in the hardware could be
reflected by a modified implementation of the operating system's interface, for
instance, by developing a new driver for a new hardware device. An operating
system (OS) layer forms the lowest-level subsystem of this architecture.
Enterprise Application Integration in business management:
Enterprises are facing the challenge of integrating complex software systems
in a heterogeneous information technology landscape that has grown in an
evolutionary way for years, if not for decades. Most of the application systems
have been developed independently of each other, and each application stores
its data locally, either in a database system or some other data store, leading
to siloed applications.
Data heterogeneity issues occur if a logical data item, for instance a
customer address, is stored multiple times in different siloed applications.
Assume that customer data is stored in an enterprise resource planning system
and a customer relationship management system. Although both systems use
a relational database as storage facility, the data structures will be different
and not immediately comparable.
These differences involve the types of particular data fields (for instance,
strings of different length for the attribute CustomerName) as well as the
names of the attributes. In the customer example, in one system the attribute CAddr will
denote the address of the customer, while in the other system the attribute
StreetAdrC denotes the address.
The next level of heterogeneity regards the semantics of the attributes.
Assume there is an attribute Price in the product tables of two application
systems. The naming of the attribute does not indicate whether the price
includes or excludes value-added tax. These semantic differences need to be
sorted out if the systems are integrated. Data integration technologies are
used to cope with these syntactic and semantic difficulties.
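Such a mapping can be sketched as follows; the record layouts and the assumed VAT rate are illustrative, with the attribute names CAddr, StreetAdrC, and Price taken from the example above:

```python
# Hypothetical records from two siloed systems storing the same logical data.
# The ERP system stores net prices; the CRM system stores gross prices.
erp_record = {"CustomerName": "Alice Smith", "CAddr": "1 Main St", "Price": 100.0}
crm_record = {"CustomerName": "Alice Smith", "StreetAdrC": "1 Main St", "Price": 119.0}

VAT_RATE = 0.19  # assumed value-added tax rate

def to_canonical(record: dict, source: str) -> dict:
    """Map a source-specific record to one canonical schema,
    resolving both attribute-name and semantic differences."""
    if source == "erp":
        address = record["CAddr"]
        net_price = record["Price"]                    # already excludes VAT
    elif source == "crm":
        address = record["StreetAdrC"]
        net_price = record["Price"] / (1 + VAT_RATE)   # strip the included VAT
    else:
        raise ValueError(f"unknown source: {source}")
    return {"name": record["CustomerName"],
            "address": address,
            "net_price": round(net_price, 2)}
```

After the mapping, both records compare equal in the canonical schema, which is what the integrated view requires.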
Data integration is an important aspect in enterprise application integration.
In this section, traditional point-to-point enterprise application integration is discussed.
Enterprise Resource Planning Systems in business process management:
To address these integration problems, Enterprise Resource Planning systems
(ERP systems) were developed. The great achievement of enterprise resource planning systems is
that they provide an integrated database that spans large parts of an organization.
Enterprise resource planning systems basically reimplemented these disparate
enterprise application systems on the basis of an integrated and
consistent database.
An enterprise resource planning system stores its data in one centralized
database, and a set of application modules provides the desired functionality,
including human resources, financials, and manufacturing. Enterprise resource
planning systems have effectively replaced numerous heterogeneous enterprise
applications, thereby solving the problem of integrating them.
Fig. 2.3. Two-tier client-server architecture
Enterprise resource planning systems are accessed by client applications.
These client applications access an application server
that issues requests to a database server. We do not address the architectures
of enterprise systems in detail but stress the integrated data storage and the
remote access through client software.
With the growth of enterprises and new market requirements, driven by
new customer needs around the year 2000, the demand for additional functionality
arose, and new types of software systems entered the market. The most
prominent types of software systems are supply chain management systems,
or SCM systems, and customer relationship management systems, or CRM
systems. While basic functionality regarding supply chain management has
already been realized in enterprise resource planning systems, new challenges
due to increased market dynamics have led to dedicated supply chain management
systems. The main goal of these systems is to support the planning,
operation, and control of supply chains, including inventory management,
warehouse management, management of suppliers and distributors, and demand
planning.
Regarding the evolution of enterprise systems architectures, the main point
is that new types of information systems have entered the market, often
developed by vendors other than that of the enterprise resource planning
system that many companies run. At the technical level, the supply chain management
system hosts its own database, with data related to supply chains. Since
large amounts of data are relevant for both enterprise resource planning and
supply chain management, data is stored redundantly. As a result, system architects
face the same problems as they did years ago with the heterogeneous
enterprise applications.
As with the settings mentioned, in order to avoid data inconsistencies and,
at the end of the day, dissatisfied customers, any modification of data needs
to be transmitted to all systems that host redundant copies of the data. If, for
example, information on a logistics partner changes that is relevant for both
the enterprise resource planning system and the supply chain management
system, then this change needs to be reflected in both systems. From a data
integrity point of view, this change even needs to take place within a single
distributed transaction, so that multiple concurrent changes do not interfere
with each other.
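To make the consistency requirement concrete, the following sketch propagates one change to every system that holds a redundant copy, and undoes partial updates on failure. The in-memory "systems" and the compensation logic are simplifying assumptions; a real installation would rely on a distributed transaction protocol such as two-phase commit.

```python
# Hypothetical sketch: two systems hold redundant copies of partner data.
erp = {"partner/logistics": "Acme Logistics"}
scm = {"partner/logistics": "Acme Logistics"}

def update_everywhere(systems, key, new_value):
    """Apply the change to every system, or to none (compensate on failure)."""
    applied = []  # (system, old_value) pairs, kept for rollback
    try:
        for system in systems:
            applied.append((system, system[key]))
            system[key] = new_value
    except Exception:
        for system, old_value in applied:  # undo partial updates
            system[key] = old_value
        raise

update_everywhere([erp, scm], "partner/logistics", "Acme Logistics Ltd.")
```

After the call, both redundant copies reflect the change; if any system had rejected the update, the compensation loop would restore the previous state everywhere.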
The source of the problem is, again, redundant information spread across
multiple application systems. Since this information is not integrated, the
user of an enterprise resource planning system can access only the information
stored in this system. However, the customer relationship management system also holds valuable data about a given customer.
When the customer calls and the call centre personnel can only access the
information stored in one system, and is therefore not aware of the complete
status of the customer, the customer is likely to become upset; at least, he does
not feel well served. The customer expects better service, where the personnel is aware of the complete status and not just of the partial status that happens to be stored in the software system that the call centre agent can access. In the
scenario discussed, the call centre agent needs to know the complete status
of the customer, no matter in which software system the information might be
buried.
The great achievement of enterprise resource planning systems is that they provide an integrated database that spans large parts of an organization.
The hub-and-spoke paradigm is based on a centralized hub and a number of spokes.
The centralized enterprise application integration middleware represents the
hub, and the applications to be integrated are reflected by the spokes. The
applications interact with each other via the centralized enterprise application
integration hub.
It is an important feature of hub-and-spoke architectures that the sender of
a message need not encode the receiver of the message. Instead, each message
is sent to the enterprise application integration hub. The hub is configured in
such a way that the message structure and content can be used to automatically
detect the receiver or receivers of a message.
The advantage of these centralized middleware architectures is that the
number of connections can be reduced. No longer are connections in the order
of N × N required to connect N application systems. Since each application
system is attached to the centralized hub, N interfaces will suffice. Using these
interfaces, the specific relationships between the applications can be reflected
in the configuration of the middleware.
The centralized hub provides
adapters that hide the heterogeneity of the application systems from
each other. Each application system requires the development of a dedicated
adapter to attach to the hub.
Depending on the complexity of these systems (and the availability of generic adapters provided by the enterprise application integration vendor),
the development of the adapter might consume considerable resources. When
the adapters are in place and the hub is configured, the applications can
interact with each other in an integrated manner.
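What such an adapter does can be illustrated with a small, hypothetical sketch: it translates an application's native record format into a canonical message format understood by the hub. The field names are invented for illustration.

```python
# Hypothetical adapter: translates the native format of one application
# into the canonical message format used inside the integration hub.
def legacy_order_adapter(native_record):
    """Map a legacy order record to a canonical hub message."""
    return {
        "type": "Order",
        "id": native_record["ORDNO"],
        "amount": float(native_record["AMT"]),
        "currency": native_record.get("CUR", "EUR"),
    }

message = legacy_order_adapter({"ORDNO": "A-4711", "AMT": "129.90"})
```

Each application attached to the hub needs one such adapter; the hub itself then only deals with canonical messages.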
On a technical level, message brokers can be used to realize a hub-and-spoke enterprise application integration system. Message brokers are software systems that allow a user to define rules for the communication between applications. Therefore, the burden of implementing (and changing) communication structures is taken away from the applications. By defining in a declarative way how communication between applications takes place, implementation is replaced by declaration, i.e., by the declaration of the communication structures.
Response to change is improved, because the sender is not required to
implement these changes locally. These changes can be specified in a declarative
way in the central hub, rather than by coding in the applications.
The hub uses rules to manage the dependencies between the applications. Based on these rules, the hub can use information on the identity of the sender, the message type, and the message content to decide into which message queues a received message is relayed. Besides relaying messages to recipients, message
brokers also transform messages to realize data mapping between the applications,
so that data heterogeneity issues can be handled appropriately.
Adapters of application systems are used to perform these message transformations.
As shown in Figure 2.8, each application is linked to the message broker,
reflected by the directed arcs from the applications to the message broker, in
particular, to the rule evaluation component of the message broker. On receipt
of a message, the message broker evaluates the rules and inserts the message
into the queues of the recipients.
The queues are used for guaranteed delivery of messages. Note that any
change in the communication is handled through the message broker: by establishing
new rules or by adapting existing rules, these changes can be realized.
There is no implementation effort required for realizing these changes; just a
modification of the declarative rules.
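A minimal sketch of such declarative routing rules might look as follows; the rule representation and the queue names are assumptions made for illustration, not a specific product's API.

```python
from collections import deque

# Hypothetical rule-based relay: the hub inspects each message and inserts
# it into the queues of all matching recipients.
queues = {"scm": deque(), "crm": deque()}

# Each rule pairs a predicate over the message with a recipient queue name.
rules = [
    (lambda m: m["type"] == "PartnerChanged", "scm"),
    (lambda m: m["type"] == "PartnerChanged", "crm"),
    (lambda m: m["type"] == "DemandForecast", "scm"),
]

def relay(message):
    """Evaluate all rules; enqueue the message for every matching recipient."""
    for predicate, recipient in rules:
        if predicate(message):
            queues[recipient].append(message)

relay({"type": "PartnerChanged", "sender": "erp", "partner": "Acme Logistics"})
```

Changing the communication structure then means editing the `rules` list, not reprogramming any of the attached applications.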
Publish/subscribe is a mechanism to link applications to message brokers. The idea is that applications can subscribe to certain messages or types of messages, and applications can also publish messages. The information provided by the publish and subscribe operations is used by the enterprise application integration hub to relay the messages. At a technical level, enterprise application integration with a message broker relies on adapters that transform data and protocols between senders and receivers.
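The publish/subscribe mechanism can be sketched in a similarly simplified way; the message type names and the callback style are assumptions for illustration.

```python
# Hypothetical publish/subscribe sketch: applications register interest in
# message types; the hub relays each published message to all subscribers.
subscriptions = {}  # message type -> list of subscriber callbacks
received = []       # records which application got which payload

def subscribe(message_type, callback):
    subscriptions.setdefault(message_type, []).append(callback)

def publish(message_type, payload):
    for callback in subscriptions.get(message_type, []):
        callback(payload)

subscribe("OrderCreated", lambda p: received.append(("crm", p)))
subscribe("OrderCreated", lambda p: received.append(("scm", p)))
publish("OrderCreated", {"order": "A-4711"})
```

Note that the publisher does not name its recipients; the hub derives them from the current subscriptions.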
Enterprise application integration technology:
Enterprise application integration technology is based on middleware technology
that has been around for years. The goal is to take advantage of these
technologies so that data in heterogeneous information technology landscapes
can be integrated properly. In addition to data integration, the processes that
the application systems realize also need to be integrated. This means that
one system performs certain steps and then transfers control to another system
which takes the results and continues operation. In the context of this
book, the process integration part of enterprise application integration is at
the centre of attention.
Enterprise application integration faces the problem that each integration
project requires design and implementation efforts that might be considerable.
When directly linking each pair of applications, system integrators run into
the N × N problem, meaning that the number of interfaces to develop rises with the square of the number N of applications to be integrated.
A sketch of this integration issue is represented in Figure 2.5, which shows N = 6 siloed applications and their integration links. Each link represents an interface that connects the application systems associated with it. Therefore, the number of interfaces between pairs of application systems grows in the order of N × N, incurring considerable overhead. If there were links between all pairs of application systems, then the number of interfaces to develop would be 5 + 4 + 3 + 2 + 1 = 15. In the general case, the number of links between N application systems is N(N-1)/2 and therefore rises with the square of the number of application systems. In the scenario shown, not all pairs of application systems are connected, but the problem of the large number of interfaces can nevertheless be seen.
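The interface counts above can be verified with a one-line computation:

```python
def pairwise_links(n):
    """Interfaces needed when every pair of n applications is linked directly."""
    return n * (n - 1) // 2

full_mesh = pairwise_links(6)   # 5 + 4 + 3 + 2 + 1 = 15
hub_and_spoke = 6               # with a central hub: one adapter per application
```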
In enterprise computing, changes are abundant, and a systems architecture
should support changes in an efficient and effective manner. The enterprise
application integration architecture resulting from point-to-point integration
does not respond well to changes. The reason is the hard-wiring of the interfaces. Any change in the application landscape requires adaptation of the
respective interfaces. This adaptation is typically realized by reprogramming
interfaces, which requires considerable resources.
A specific realization platform of enterprise application integration is
message-oriented middleware, where applications communicate by sending
and receiving messages. While conceptually the middleware realizes a centralized
component, the direct connection between the applications (and therefore the point-to-point integration) is still in place, because each sender needs
to encode the receiver of a message.
The main focus of message-oriented middleware is on execution guarantees,
such as guaranteed message delivery. However, the problem mentioned above
is not solved, since any change in the application landscape needs to be implemented
by changing the communication structure of applications.
Interacting business processes in business-to-business scenarios:
The business motivation behind interacting business processes stems from
value systems, which represent collaborations between the value chains of
multiple companies. These high-level collaborations are realized by interacting
business processes, each of which is run by one company in a business to
business process scenario. This section studies interactions between business
processes performed by different companies.
For the sake of concreteness, this section uses an example from the area of
order processing, described as follows. A buyer orders goods from a reseller, who acts as an intermediary. The reseller sends a respective product request to a manufacturer. In addition, the reseller asks a payment organization to take care of the billing. The manufacturer then ships the products to the buyer.
The value system, shown at a high level of abstraction, is then detailed. Note that for each value chain in the value system there is a participant in the business-to-business collaboration, detailing its internal structure and its contribution to the collaboration.
There are many interesting issues to study: how do we make sure that the
business-to-business process created by putting together a set of existing business
processes really fulfils its requirements? Structural criteria, for instance, the absence of deadlocks, need to hold for these processes.
The problem is aggravated by the fact that internal business processes are
an important asset of enterprises. Therefore, few enterprises like to expose
their internal processes to the outside world. This means that
the properties of the overall business-to-business collaboration cannot be based on the actual
detailed local processes run by the enterprises, but rather on the externally
visible behaviour and the associated models to represent it.
Organizational Business Processes:
The early 1990s saw process orientation as a strong development not only to
capture the activities a company performs, but also to study and improve the
relationships between these activities.
The general approach of business process reengineering is a holistic view of an enterprise, in which business processes are the main instrument for organizing its operations. Business process reengineering is based on the understanding
that the products and services a company offers to the market are provided
through business processes, and a radical redesign of these processes is the
road to success.
Process orientation is based on a critical analysis of Taylorism as a concept
to organize work, originally introduced by Frederick Taylor to improve industrial
efficiency. This approach uses functional breakdown of complex work into units of small granularity, so that a highly specialized work force can efficiently conduct these work units. Taylorism has been very successful in manufacturing and, as such, considerably fuelled industrial production in the late nineteenth and early twentieth century.
Small-grained activities conducted by highly specialized personnel require
many handovers of work in order to process a given task. In early industrial manufacturing, products were typically assembled in a few steps only, so that handovers of work did not introduce delays. In addition, the tasks were of a rather simple nature, so that no context information on previously conducted steps was required for a particular worker.
Using Taylorism to organize work in modern organizations proved inefficient,
because the steps during a business process are often related to
each other. Context information on the complete case is required during the
process. The handovers of work cause a major problem, since each worker
involved requires knowledge of the overall case. For this reason, the functional breakdown of work into fine-grained pieces that proved effective in early manufacturing proves inefficient in modern business organizations that mainly
process information.
From a process perspective, it is instrumental to combine multiple units of work of small granularity into work units of larger granularity. Thereby, the handovers of work can be reduced. But this approach requires workers to have broad skills and competencies, i.e., it requires knowledge workers who have a
broad understanding of the ultimate goals of their work.
At an organizational level, process orientation has led to the characterization
of the operations of an enterprise using business processes. While there
are different approaches, they have in common the fact that the top-level business
processes are expressed in an informal way, often even in plain English
text. Also, each enterprise should not have more than about a dozen organizational business processes. These processes are often described by the same symbols as those used for value systems, but the reader should be aware that different levels of abstraction are in place.
The structure of organization-level business process management is as follows. The business process management space is influenced by the business strategy of the enterprise, i.e., by the target markets, by business strategies opening new opportunities, and, in general, by the overall strategic goals of the enterprise. Information systems, at the lower end of this structure, are valuable assets that knowledge workers can take advantage of.
Enterprise Modelling and Process Orientation:
In addition to developments in software architecture, business administration
also contributed to the rise of business process management. There were two
major factors that fuelled workflow management and business process management: value chains as a means to functionally break down the activities a company performs and to analyze their contribution to the commercial success of the company, and process orientation as the way to organize the activities of enterprises.
Value chains are a well-known approach in business administration to organize the work that a company conducts to achieve its business goals.
Value chains were developed by Michael Porter to organize high-level business functions and
to relate them to each other, providing an understanding of how a company
operates.
Porter states that "the configuration of each activity embodies the way that activity is performed, including the types of human and physical assets employed and the associated organizational arrangements", and he continues to look at the enterprise and its ecology by stating that "gaining and sustaining competitive advantage depends on understanding not only a firm's value chain but how the firm fits in the overall value system."
In order to fulfil their business goals, companies cooperate with each other,
i.e., the value chains of these companies are related to each other. The ecology
of the value chains of cooperating enterprises is called value system. Each value
system consists of a number of value chains, each of which is associated with
one enterprise.
The value chain of a company has a rich internal structure, which is represented
by a set of coarse-grained business functions. These high-level business
functions, for instance, order management and human resources, can be broken
down into smaller functional units, spanning a hierarchical structure of
business functions of different granularity.
The process of breaking down a coarse-grained function into finer-grained
functions is called functional decomposition. Functional decomposition is an
important concept to capture and manage complexity. For instance, order
management can be broken down into business functions to obtain and store
an order and to check an order.
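The order-management decomposition just described can be represented as a simple tree structure. The following Python sketch uses the example's function names; the class and method names are illustrative, and the leaf functions are the candidates that later become activities of operational business processes.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessFunction:
    """A node in the functional decomposition hierarchy."""
    name: str
    subfunctions: list["BusinessFunction"] = field(default_factory=list)

    def decompose(self, *names: str) -> None:
        # Break this function down into finer-grained subfunctions.
        self.subfunctions.extend(BusinessFunction(n) for n in names)

    def leaves(self) -> list[str]:
        # Leaf functions of the hierarchy; these are the candidates
        # for activities of operational business processes.
        if not self.subfunctions:
            return [self.name]
        return [leaf for f in self.subfunctions for leaf in f.leaves()]

order_management = BusinessFunction("Order Management")
order_management.decompose("Obtain Order", "Store Order", "Check Order")
print(order_management.leaves())  # → ['Obtain Order', 'Store Order', 'Check Order']
```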
Technical Integration Challenges in Challenges for Workflow Management:
While system workflows are well equipped to support the process aspect of
enterprise application integration scenarios, the same technical integration
problems need to be solved in system workflow projects as those in traditional
enterprise application integration projects.
Application systems that need to be integrated are typically not equipped
with well-documented interfaces that can be used to get hold of the required
functionality. Functionality of application systems might also be implemented
in the graphical user interfaces, so that low-level implementation work is required
to access the application system functionality.
Another important source of trouble is relationships between different applications
at the code level. Direct invocation between software systems is an
example of these relationships, so that an invocation of an application system
automatically spawns off an invocation of another application system. In
these settings, the overall process flow is in part realized at the application
code level, so that the workflow management system is capable of controlling
only parts of the actual process flow, but not the complete process.
The granularity of the workflow activities and the granularity of the functionality
provided by the underlying application systems might be different.
Fine-granular business activities might have been designed in the process
model that cannot be realized, because the underlying application system
only provides coarse-grained functionality. In some cases, the interface to the
application can be modified so that fine-grained functionality is available.
This alternative is likely to incur considerable cost, or it might be impossible
for some applications. Another alternative is changing the granularity of
the business activities. In this case, certain properties of the process might
not be realizable; for instance, the concurrent execution of two fine-granular
activities. As a result, the run time of the workflow will not be optimal.
Service-oriented architectures and service-enabling of legacy applications
are important concepts currently being investigated to address these technical
problems.
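As a sketch of such service-enabling under assumed interfaces: a facade can wrap an indivisible, coarse-grained legacy operation so that workflow activities see finer-grained calls, even though true concurrency between those calls remains impossible. All class and method names below are hypothetical.

```python
class LegacyOrderSystem:
    """Assumed legacy application: exposes only one coarse-grained call."""
    def process_order(self, order: dict) -> dict:
        # Validates and stores the order in a single, indivisible step.
        return {"stored": True, "valid": order.get("amount", 0) > 0}

class OrderServiceFacade:
    """Service-enabling wrapper offering finer-grained operations.

    Because the legacy call is indivisible, the 'fine-grained' methods
    share one underlying invocation and cache its result (one facade
    instance per order in this sketch); the two activities still cannot
    execute concurrently against the back end.
    """
    def __init__(self, legacy: LegacyOrderSystem):
        self._legacy = legacy
        self._cache: dict | None = None

    def _invoke(self, order: dict) -> dict:
        if self._cache is None:
            self._cache = self._legacy.process_order(order)
        return self._cache

    def check_order(self, order: dict) -> bool:
        return self._invoke(order)["valid"]

    def store_order(self, order: dict) -> bool:
        return self._invoke(order)["stored"]

facade = OrderServiceFacade(LegacyOrderSystem())
print(facade.check_order({"amount": 250}))  # → True
```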
Challenges for Workflow Management in Lack of Adequate Support for Knowledge Workers:
In contrast to many developments in software architecture and technology,
workflow management systems have massive effects on the daily work for their
users. The method of data storage and whether the program was developed
with a procedural programming language or an object oriented programming
language are relevant only for system designers and developers; these implementation
aspects do not matter for the users of these systems. Therefore,
special care has to be taken in the rollout of workflow applications; early participation
of users in the design of these systems is important to avoid user
acceptance issues.
Workflow management systems represent not only processes but also the
organizational environment in which these processes are executed. This means
that persons are represented by their skills, competences, and organizational
positioning. This information is used to select persons to perform certain
activities. The active selection of persons by the workflow management system
has not been considered appropriate, since human workers felt that a machine
burdened them with additional work. This feeling might also be due to crude
interfaces of early workflow management systems.
The role of knowledge workers is another area where traditional workflow
management systems scored low. Workflow models prescribe the process flow,
and a workflow management system makes sure that the workflow is performed
just as it is described. This also means that there is little room for creativity
for the knowledge worker. Any process instance that has not been envisioned
by the process designer cannot be realized. This might lead to situations where
certain parts of the overall business process are not handled by the workflow
management system. Sometimes, even paper-based solutions were used by the
knowledge workers, leading to inconsistent states in the overall process.
Enterprise Services and Service-Oriented Architectures:
The roles in service-oriented architectures as discussed above are not completely
filled in typical enterprise scenarios. The specification of services is typically
done by the provider of the service, i.e., by the system architects
responsible for service-enabling the particular application.
The service registry is installed locally, and its access by other companies
is usually disallowed. The most striking difference to service-oriented architectures
as defined by Burbeck is the absence of dynamic matchmaking. As
enterprise services are developed, they are specified and registered in a local
registry. When a new composite application is developed, the designers consult
the registry to find suitable services that can be used to perform certain
tasks in the composite application. This search is a manual process, which in
some cases is assisted by a taxonomy and a textual description of the services.
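Such a local registry with taxonomy-assisted manual search can be sketched as follows; the service names, taxonomy terms, and descriptions are invented for illustration.

```python
# Minimal sketch of a locally installed service registry.
registry: dict[str, dict] = {}

def register(name: str, taxonomy: list[str], description: str) -> None:
    registry[name] = {"taxonomy": taxonomy, "description": description}

def find(term: str) -> list[str]:
    # The 'search' designers perform by hand: match a taxonomy term
    # or a word in the textual description of the service.
    term = term.lower()
    return [name for name, meta in registry.items()
            if term in (t.lower() for t in meta["taxonomy"])
            or term in meta["description"].lower()]

register("CreatePurchaseOrder", ["procurement", "order"],
         "Creates a purchase order in the back-end systems")
register("CheckCustomerCredit", ["finance", "credit"],
         "Performs a credit check for a given customer")

print(find("order"))  # → ['CreatePurchaseOrder']
```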
There are a number of hard problems in this context that are unsolved
today. One of the main problems regards the scoping of services: the functionality
provided by one or more application systems that is suitable for an
enterprise service. If the granularity is small, then the level of reuse is small
too, because many enterprise services need to be composed to achieve the
desired functionality.
If on the other hand the granularity is large, then there might be only
few scenarios where the enterprise service fits well and where using it makes
sense. Tailoring of services of large granularity is also not a valid option, since
extensive tailoring hampers reuse. As in many related cases, there is no general
answer to this question. The choice of a suitable service granularity depends on
the particular usage scenario and on the properties of the application systems
to integrate and the composite applications to develop.
In enterprise services architectures, each enterprise service is typically associated
with exactly one application system. This is a limitation, since building
an enterprise service on top of a number of related back-end application
systems involves system integration, so that reuse is simplified.
To illustrate this concept, an example is introduced. Consider a purchase
order enterprise service in which an incoming purchase order needs to be stored
in multiple back-end application systems. In this case, the enterprise service
can be used with ease, since it is invoked once by a composite application and
it automatically provides the integration of the back-end system by storing
the purchase order (with the relevant data mappings to cater to data type
heterogeneity) in the respective back-end application systems.
An integration of legacy systems can be realized within an enterprise service.
This allows using enterprise services at a higher level of granularity, so
that integration work can actually be reused in multiple composite applications.
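A minimal sketch of such a purchase-order enterprise service, assuming two hypothetical back-end systems with different schemas, might look as follows; the field names and data mappings are illustrative.

```python
# In-memory stand-ins for two back-end application systems.
erp_db: list[dict] = []
crm_db: list[dict] = []

def store_in_erp(po: dict) -> None:
    # Assumed ERP schema: order number plus amount in cents.
    erp_db.append({"order_no": po["id"],
                   "amount_cents": int(po["amount"] * 100)})

def store_in_crm(po: dict) -> None:
    # Assumed CRM schema: keyed by customer, amount kept in euros.
    crm_db.append({"customer": po["customer"], "value_eur": po["amount"]})

def purchase_order_service(po: dict) -> None:
    """Single entry point for composite applications: one invocation,
    and the purchase order is mapped and stored in every back end."""
    for store in (store_in_erp, store_in_crm):
        store(po)

purchase_order_service({"id": "PO-17", "customer": "ACME", "amount": 4999.50})
```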
Process Support Without Workflow Systems:
Not all environments ask for a workflow management system. In cases where
no changes to the process structure are envisioned, a coding of the process
flow can be an attractive and adequate choice.
In database administration there are predefined procedures that are enacted
following a process model. Similar developments can be found in publishing
environments where print workflow is a common tool to describe and
perform the steps that lead to publishable results. Most enterprise resource
planning systems feature a dedicated workflow component that allows us to
model new processes and enact them in the system environment. Due to their
close link to particular applications, these systems are also called embedded
workflow management systems.
Business processes are also realized in online shops, such as train reservation
systems or electronic book stores, where steps of an interaction process
are depicted in graphical form. This graphical representation guides the user
in his interaction with the Web site. In a train reservation online shop, for
instance, there are interaction steps for querying train connections, for getting
detailed information on the connections, for selecting connections, for
providing payment information, and for booking and printing the train ticket.
Since this type of interaction process can easily be realized using traditional
Web page design, workflow management systems are not required. However,
these examples show that the business process paradigm is helpful also in
application scenarios that do not require dedicated workflow support.
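Coding such an interaction process directly, without a workflow management system, can be as simple as an ordered step list; the step names below are illustrative.

```python
from typing import Optional

# The process flow is fixed in code, not in a workflow model: each step
# must be completed before the next one is offered to the user.
STEPS = ["query_connections", "show_details", "select_connection",
         "enter_payment", "book_and_print_ticket"]

def next_step(completed: list[str]) -> Optional[str]:
    """Return the step the Web site should present next, or None when done."""
    for step in STEPS:
        if step not in completed:
            return step
    return None

print(next_step([]))                     # → query_connections
print(next_step(["query_connections"]))  # → show_details
print(next_step(list(STEPS)))            # → None
```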
Enterprise application systems, such as enterprise resource planning systems,
realize literally thousands of business processes. These processes can be
customized to fit the particular needs of the company that runs the system. In
most cases, the business processes are realized within the system, so no integration
issues emerge. If the predefined business processes cannot be tailored
in a way that fits the needs of the company, then integrated process modelling
functionality can be used to model new processes.
From Business Functions to Business Processes:
To provide a more detailed view,
these top-level business functions are broken down to functions of smaller
granularity and, ultimately, to activities of operational business processes.
Functional decomposition is the technique of choice, where a value system
represents the highest level of aggregation. Each value system consists of a
number of value chains.
The functional decomposition of the value chain of enterprise E is exemplified
for one particular path of functions in the marketing and sales top-level
business function. Among many other functions, marketing and sales includes
a business function, OrderManagement, that contains functions related to the
management of incoming orders. Order management is decomposed further
into business functions for getting and checking orders. To check orders, they
need to be analyzed, and there are functions for simple and advanced checking
of orders. Traditionally, functional decomposition was used to describe enterprises
based on the functions they perform. As discussed in Chapter 1, concentrating
on the functions an enterprise performs and neglecting their interplay falls
short of properly representing how enterprises work. Therefore, functional
decomposition is used as a first step in the representation of enterprises based
on business processes.
Operational business processes relate activities to each other by introducing
execution constraints between them. In principle, relating functions to
business processes can be applied for different granularities of business functions.
In case high-level business functions are considered, a textual specification
of the process is used, since concrete execution constraints between their
constituents are not relevant in coarse-grained business functions.
Consider, for instance, the business functions incoming logistics and operations.
At this very coarse level of functionality, no ordering of these business
functions is feasible: both business functions are performed concurrently, and
only at a lower level of granularity does a concrete ordering make sense.
For instance, when the operations business function orders additional material,
then there are concrete activities that have a concrete ordering. Within
operations, an internal order is created and sent to incoming logistics. On arrival
of this order, raw material is provided to operations. In case no raw material
is available at the manufacturing company, an external order is created
and sent to a supplier of the raw material. Therefore, business processes relate
fine-grained business functions, typically the leaves of the business function
decomposition tree.
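The material-ordering behaviour described above can be sketched as a conditional activity sequence; the step labels and the stock check below are illustrative assumptions, not the book's notation.

```python
def order_material(required: int, stock: int) -> list[str]:
    """Return the ordered list of activities the process performs."""
    steps = ["create internal order",
             "send internal order to incoming logistics"]
    if stock >= required:
        steps.append("provide raw material to operations")
    else:
        # No raw material available: an external order goes to a supplier
        # before operations can be supplied.
        steps += ["create external order",
                  "send external order to supplier",
                  "receive raw material",
                  "provide raw material to operations"]
    return steps

print(order_material(required=10, stock=25))
```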
Conceptual Model and Terminology in Business Process Modelling Foundation:
The business process modelling space as laid out in Chapters 1 and 2 is organized
using conceptual models. Figure 3.1 introduces a model of the concepts
at the core of business process management. While the terms mentioned have
been used in the previous chapters informally, the concepts behind these terms
and their relationships will now be discussed in more detail, using conceptual
models. These models are expressed in the Unified Modeling Language, an
object-oriented modelling and design language.
Business processes consist of activities whose coordinated execution realizes
some business goal. These activities can be system activities, user interaction
activities, or manual activities. Manual activities are not supported by
information systems. An example of a manual activity is sending a parcel to
a business partner.
User interaction activities go a step further: these are activities that knowledge
workers perform, using information systems. There is no physical activity
involved. An example of a human interaction activity is entering data on an
insurance claim in a call centre environment. Since humans use information
systems to perform these activities, applications with appropriate user interfaces
need to be in place to allow effective work. These applications need to
be connected to back-end application systems that store the entered data and
make it available for future use.
Some activities that are conducted during the enactment of a business
process are of manual nature, but state changes are entered in a
business process management system by means of user interaction activities. For instance,
the delivery of a parcel can be monitored by an information system.
Typically, the actual delivery of a parcel is acknowledged by the recipient
with her signature. The actual delivery is important information in logistics
business processes that needs to be represented properly by information systems.
There are several types of events during a logistics process. These events
are often available to the user as tracking information. While the activities
are of manual nature, an information system, the tracking system, receives
information on the current status of the process.
System activities do not involve a human user; they are executed by information
systems. An example of a system activity is retrieving stock information
from a stock broker application or checking the balance of a bank
account. It is assumed that the actual parameters required for the invocation
are available. If a human user provides this information, then it is a user
interaction activity. Both types of activities require access to the respective
software systems.
Certain parts of a business process can be enacted by workflow technology.
A workflow management system can make sure that the activities of a business
process are performed in the order specified, and that the information systems
are invoked to realize the business functionality. This relationship between
business processes and workflows is represented by an association between
the respective classes. We argue that workflow is not a subclass of business
process, since a workflow realizes a part of a business process, so a workflow
is not in an �is-a� relationship with a business process, but is an association.
With regard to the types of activities mentioned, system activities are associated
with workflows, since system activities can participate in any kind.
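The argument that a workflow is associated with, rather than a subclass of, a business process can be mirrored in a small class sketch; the classes and attributes below are a deliberate simplification, not the exact model of Figure 3.1.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    kind: str  # "system", "user interaction", or "manual"

@dataclass
class Workflow:
    # A workflow enacts activities that software can drive;
    # manual activities do not appear here.
    activities: list[Activity] = field(default_factory=list)

@dataclass
class BusinessProcess:
    # Association, not inheritance: a process *has* workflows that
    # realize parts of it, alongside activities no workflow can enact.
    activities: list[Activity] = field(default_factory=list)
    workflows: list[Workflow] = field(default_factory=list)

check = Activity("check balance", "system")
ship = Activity("send parcel", "manual")
process = BusinessProcess(activities=[check, ship],
                          workflows=[Workflow(activities=[check])])
print(issubclass(Workflow, BusinessProcess))  # → False
```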
using conceptual models. Figure 3.1 introduces a model of the concepts
at the core of business process management. While the terms mentioned have
been used in the previous chapters informally, the concepts behind these terms
and their relationships will now be discussed in more detail, using conceptual
models. These models are expressed in the Unified Modeling Language, an
object-oriented modelling and design language.
Business processes consist of activities whose coordinated execution realizes
some business goal. These activities can be system activities, user interaction
activities, or manual activities. Manual activities are not supported by
information systems. An example of a manual activity is sending a parcel to
a business partner.
User interaction activities go a step further: these are activities that knowledge
workers perform, using information systems. There is no physical activity
involved. An example of a human interaction activity is entering data on an
insurance claim in a call centre environment. Since humans use information
systems to perform these activities, applications with appropriate user interfaces
need to be in place to allow effective work. These applications need to
be connected to back-end application systems that store the entered data and
make it available for future use.
Some activities that are conducted during the enactment of a business
process are of manual nature, but state changes are entered in a
business process management system by means of user interaction activities. For instance,
the delivery of a parcel can be monitored by an information system.
Typically, the actual delivery of a parcel is acknowledged by the recipient
with her signature. The actual delivery is important information in logistics
business processes that needs to be represented properly by information systems.
There are several types of events during a logistics process. These events
are often available to the user as tracking information. While the activities
are of a manual nature, an information system (the tracking system) receives
information on the current status of the process.
System activities do not involve a human user; they are executed by information
systems. An example of a system activity is retrieving stock information
from a stock broker application or checking the balance of a bank
account. It is assumed that the actual parameters required for the invocation
are available. If a human user provides this information, then it is a user
interaction activity. Both types of activities require access to the respective
software systems.
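The classification of activities discussed above can be sketched as a small class hierarchy. The class and attribute names below are illustrative assumptions, not part of the conceptual model in Figure 3.1 itself:

```python
from dataclasses import dataclass

# Common superclass for all activity types in the classification.
@dataclass
class Activity:
    name: str

# Performed by software systems without user involvement.
@dataclass
class SystemActivity(Activity):
    invoked_application: str

# Performed by knowledge workers using an information system.
@dataclass
class UserInteractionActivity(Activity):
    user_interface: str

# Not supported by information systems at all.
@dataclass
class ManualActivity(Activity):
    pass

activities = [
    SystemActivity("check balance", invoked_application="banking backend"),
    UserInteractionActivity("enter insurance claim", user_interface="call centre form"),
    ManualActivity("send parcel"),
]

# Only manual activities lack information system support.
manual = [a.name for a in activities if isinstance(a, ManualActivity)]
print(manual)  # ['send parcel']
```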
Certain parts of a business process can be enacted by workflow technology.
A workflow management system can make sure that the activities of a business
process are performed in the order specified, and that the information systems
are invoked to realize the business functionality. This relationship between
business processes and workflows is represented by an association between
the respective classes. We argue that workflow is not a subclass of business
process: since a workflow realizes only a part of a business process, a workflow
is not in an "is-a" relationship with a business process, but in an association.
With regard to the types of activities mentioned, system activities are associated
with workflows, since system activities can participate in any kind of workflow.
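The argument that a workflow is associated with a business process, rather than being a subclass of it, can be made concrete in code. In this sketch (with illustrative class names), the workflow is a member of the process rather than a derived class:

```python
class Workflow:
    """Realizes a part of a business process through workflow technology."""
    def __init__(self, system_activities):
        self.system_activities = system_activities

class BusinessProcess:
    """Holds an association to a workflow instead of inheriting from it."""
    def __init__(self, name, workflow=None):
        self.name = name
        self.workflow = workflow  # association, not an "is-a" relationship

wf = Workflow(system_activities=["retrieve stock information"])
bp = BusinessProcess("order handling", workflow=wf)

# A workflow realizes part of the process but is not itself a process.
print(isinstance(wf, BusinessProcess))  # False
print(bp.workflow is wf)                # True
```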
Modelling Process Data in the Business Process Modelling Foundation:
Data modelling is at the core of database design. The Entity Relationship
approach is used to classify and organize data in a given application domain.
The Entity Relationship modelling approach belongs to the metamodel level,
because it provides the required concepts to express
data models. Data modelling will be illustrated by a sample application
domain, namely by order management.
In a modelling effort, the most important entities are identified and classified.
Entities are identifiable things or concepts in the real world that are
important to the modelling goal. In the sample scenario, orders, customers,
and products are among the entities of the real world that need to be represented
in the data model.
Entities are classified as entity types if they have the same or similar
properties. Therefore, orders are classified by an entity type called Orders.
Since each order has an order number, a date, a quantity, and an amount, all
order entities can be represented by this entity type. Properties of entities are
represented by attributes of the respective entity types.
The entities classified in an entity type need to have similar, but not identical
structure, because attributes can be optional. If the application domain
allows, for instance, for an order to have or not to have a discount, then the
discount attribute is optional. This means that two orders are classified in
the entity type Orders even if one has a discount attribute while the other does not.
Entity types in the Entity Relationship metamodel need to be represented
in a notation by a particular symbol. While there are variants of Entity Relationship
notations, entity types are often represented by rectangles, marked
with the name of the entity type. Figure 3.24 shows an entity type Orders at
the centre of the diagram. Other entity types in the sample application domain
are customers and products. The attributes are represented as ellipsoids
attached to entity types.
Entities are associated with each other by relationships. For instance, a
customer "Miller" requests an order with the order number 42. These types of
links between entities are called relationships.
Just as there are many customer entities and many order entities,
there are many customer-order relationships.
To represent these relationships, a relationship type, requests, classifies them
all. In Entity Relationship diagrams, relationship types are typically represented
by diamond symbols, connected to the respective entity types by edges.
The complex nature of data in a given application domain can be well
represented by Entity Relationship Diagrams. These diagrams can be used to
create relational database tables, using transformation rules. Once the respective
database tables have been created in a relational database, application
data can be stored persistently. The data can be retrieved efficiently using
declarative query languages, for instance Structured Query Language.
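The transformation from the order-management ER model to relational tables can be sketched with SQLite. The table and column names below are assumptions derived from the attributes named above; the optional discount attribute becomes a nullable column, and the requests relationship type is realized as a foreign key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Entity type Customers.
cur.execute("CREATE TABLE Customers (id INTEGER PRIMARY KEY, name TEXT)")

# Entity type Orders; discount is nullable because the attribute is optional.
# The 'requests' relationship type becomes a foreign key to Customers.
cur.execute("""
    CREATE TABLE Orders (
        order_number INTEGER PRIMARY KEY,
        order_date   TEXT,
        quantity     INTEGER,
        amount       REAL,
        discount     REAL,               -- optional attribute
        customer_id  INTEGER REFERENCES Customers(id)
    )
""")

cur.execute("INSERT INTO Customers VALUES (1, 'Miller')")
cur.execute("INSERT INTO Orders VALUES (42, '2009-09-01', 3, 150.0, NULL, 1)")

# A declarative query retrieves the customer who requests order 42.
cur.execute("""
    SELECT c.name FROM Customers c
    JOIN Orders o ON o.customer_id = c.id
    WHERE o.order_number = 42
""")
print(cur.fetchone()[0])  # Miller
```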
While this discussion focuses on data modelling in the context of database
applications, the same data modelling method can be used to represent data
structures in business process management. Based on these data structures,
data dependencies between activities in business processes can be captured
precisely.
Data modelling is also the basis for the integration of heterogeneous data.
In the enterprise application integration scenarios discussed above, one of the
main issues was the integration of data from heterogeneous data sources. Once
data models are available for these data sources, the data integration problem
can be addressed. There are advanced data integration techniques that also
take into account data at the instance level, but explicit data models in general
are essential to addressing data integration.
Data integration can then be realized by a mapping between the data
types. For instance, there might be applications on top of database systems A
and B, such that these systems have tables CustomerA and CustomerB, respectively,
that differ. For instance, while CName is the attribute of the CustomerA
table, referring to the name of the customer, CustN might be the respective
attribute in the CustomerB table. In order to integrate both tables, the attributes
need to be mapped. In this case, CustomerA.CName is mapped to
CustomerB.CustN.
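The mapping of CustomerA.CName to CustomerB.CustN can be sketched as a simple attribute-renaming step; the mapping table and record shapes below are illustrative:

```python
# Attribute mapping from the schema of CustomerA to the schema of CustomerB.
ATTRIBUTE_MAP = {"CName": "CustN"}

def map_record(record_a, attribute_map):
    """Rename the attributes of a CustomerA record to the CustomerB schema;
    attributes without a mapping entry keep their name."""
    return {attribute_map.get(key, key): value for key, value in record_a.items()}

customer_a = {"CName": "Miller"}
print(map_record(customer_a, ATTRIBUTE_MAP))  # {'CustN': 'Miller'}
```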
Modelling Operations in the Business Process Modelling Foundation:
While business process management organizes the work that a company performs
by focusing on organizational and functional aspects, the realization
of business process activities also needs to be taken into account. Activities
can be distinguished depending on the level of software system support. The
terms system workflows and human interaction workflows were introduced to
characterize the different kinds of business process enactment.
A classification of activities in business processes was introduced in Figure
3.1, consisting of system activities, user interaction activities, and manual
activities. To recapitulate, system activities are performed by software systems
without user interaction, user interaction activities require the involvement of
human users, and manual activities do not involve the use of information systems.
During the enactment of human interaction workflows, knowledge workers
perform activity instances. When a knowledge worker starts working on a specific activity,
the respective application program is started, and the input
data as specified in the process model is transferred to that application program.
When the knowledge worker completes that activity, the output data
generated is collected in the output parameters. These parameter values can
then be stored in the application program. They can also be transferred by
the business process management system to the next activity, as specified in
the business process model.
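The parameter passing described above can be sketched as follows; the application function and the parameter names are hypothetical:

```python
def run_user_interaction_activity(activity_name, input_data, application):
    """Start the application with the activity's input data and collect
    the output parameters when the knowledge worker completes the activity."""
    output_data = application(input_data)  # the knowledge worker works here
    return output_data

# A hypothetical claim-entry application: the worker adds the claim amount.
def claim_entry_application(input_data):
    return {**input_data, "claim_amount": 500}

# The output of one activity becomes the input of the next,
# as specified in the business process model.
step1 = run_user_interaction_activity(
    "enter claim", {"claim_id": 7}, claim_entry_application)
print(step1)  # {'claim_id': 7, 'claim_amount': 500}
```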
Business process modelling aims at mapping high-level and domain-specific
features of the application process; the technical details (the main components
of the operational perspective) are taken into account in the configuration
phase of the business process management lifecycle. The heterogeneous
nature of information technology landscapes led to various kinds of interface
definitions, most of which did not prove to be compatible. With the advent of
service-oriented computing, the operational aspects of business processes are
represented by services, providing the required uniformity.
This section discusses how activities realized by software functionality can
be modelled. Conceptually, the same levels of abstraction apply to modelling
the operational perspective as to modelling the other perspectives: at the
metamodel level, interface definition languages reside. They describe specific
interface definitions at the model level. At the instance level, executing software
code is categorized.
This approach fits the modelling of activity instances (and, therefore, also
of process instances) well, because activity instances can be realized by executing
software code. It also fits the organizational perspective in which persons
reside at the instance level. Persons are, at least in human interaction
workflows, responsible for performing activity instances.
In order to automatically invoke this software functionality, business
process management systems require concepts and technology to access these
systems. The operational perspective of business process modelling provides
the information a business process management system requires to invoke the
functionality of external application systems.
The operational perspective includes the invocation environment of application
programs, the definition of the input and output parameters of the
application program, and their mapping to input and output parameters of
business process activities. Therefore, functional requirements need to be detailed
in order for us to evaluate whether a certain software system provides
the required functionality in the context of a business process.
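A minimal sketch of such an interface definition and invocation, assuming a hypothetical stock-quote operation and a stand-in implementation:

```python
# Hypothetical interface definition for an external application.
STOCK_SERVICE_INTERFACE = {
    "operation": "get_quote",
    "inputs": {"symbol": str},
    "outputs": {"price": float},
}

def invoke(interface, activity_inputs, implementation):
    """Check the activity's input parameters against the interface
    definition, then invoke the application functionality."""
    for name, expected_type in interface["inputs"].items():
        if not isinstance(activity_inputs[name], expected_type):
            raise TypeError(f"parameter {name} must be {expected_type.__name__}")
    return implementation(**activity_inputs)

# Stand-in implementation of the stock broker application.
def get_quote(symbol):
    return {"price": 101.5}

result = invoke(STOCK_SERVICE_INTERFACE, {"symbol": "ACME"}, get_quote)
print(result)  # {'price': 101.5}
```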
This perspective is not limited to functional requirements. Non-functional
requirements also need to be represented, for instance, security properties
and quality of service properties of the invoked applications or services, such
as execution time and uptime constraints. In service-oriented architectures,
these properties are typically specified in service-level agreements between
collaborating business partners. These service-level agreements are part of a
legal contract that the parties sign.
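Quality-of-service properties such as execution time and uptime can be represented and checked against a service-level agreement; the property names and threshold values here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    # Illustrative non-functional properties from an SLA.
    max_execution_time_s: float
    min_uptime_percent: float

    def is_satisfied(self, measured_time_s, measured_uptime_percent):
        """Check measured quality-of-service values against the agreed limits."""
        return (measured_time_s <= self.max_execution_time_s
                and measured_uptime_percent >= self.min_uptime_percent)

sla = ServiceLevelAgreement(max_execution_time_s=2.0, min_uptime_percent=99.5)
print(sla.is_satisfied(1.2, 99.9))  # True
print(sla.is_satisfied(3.0, 99.9))  # False
```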