Normalized Systems


Normalized Systems is a theory for designing and engineering information systems that exhibit proven evolvability. Originally developed at the University of Antwerp, at the Department of Management Information Systems of the Faculty of Applied Economics, it aims to re-create information technology based on laws of software evolvability.


Introduction

Information technology faces severe problems today. Many IT projects are still reported to run over time, over budget, or to fall short of their required specifications, while contemporary organizations need to be ever more agile to keep pace with a swiftly changing business environment. Some observe that the same functionality seems to be built over and over again, in slightly different ways.[1] Manny Lehman's law of increasing complexity captures this reality, stating:[2]

"As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it."

Manny Lehman, 1980

This law implies that adding new functionality to existing information systems becomes more complex, and therefore more costly, over time. Indeed, software maintenance is considered the most expensive phase of an information system's life cycle, and it often increases architectural complexity and decreases software quality.[3] This matches a widespread belief among practitioners, and is consistent with the fact that information technology departments and budgets grow every year.

Furthermore, Lehman argues that as software becomes more complex, it eventually declines in usefulness. Since software grows in complexity and decreases in value as it does so, vendors that want to remain successful have two options: increase maintenance fees so that they can deliver new software, or force an upgrade onto the customer.[4]

Normalized Systems theory holds that today's IT problems are symptoms of something deeper and more fundamental. The theory is the result of identifying these fundamental principles, patterns and other methodological elements for building evolvable software architectures for enterprise systems. Indeed, the basic assumption of Normalized Systems is that information systems should be able to evolve over time and should be designed to accommodate change. Normalized Systems principles define the rules according to which software architectures have to be built so that predefined changes to the system do not cause combinatorial explosions in their impacts. In the Normalized Systems vision, Douglas McIlroy's dream of constructing information systems on rational principles becomes a reality:[5]

"expect families of routines to be constructed on rational principles so that families fit together as building blocks. In short, [the user] should be able safely to regard components as black boxes."

Douglas McIlroy, 1968

The main issue with regard to information systems is dealing with ever-increasing complexity at both the business and the technical level. And even when the growing complexity is brought under control, there is always change. Indeed, today's technology only offers static modularity, not evolvable modularity. The transition towards evolvable modularity requires true engineering and determinism to cope with change, i.e. applying principles to obtain a predictable and desired result. Ultimately, the goal is to map requirements to constructs in an invariable manner, incorporating one-to-one traceability of data and functions, abandoning triviality as well as the reliance on top-quality heuristics, and truly embracing innovation by genuinely designing information systems that accommodate change. The result is high-quality IT built from advanced modular structures of proven evolvability that realize McIlroy's vision and withstand Lehman's law, introducing new levels of reuse independent of the software environment.

In fact, Normalized Systems can be seen as a specific way of viewing service-oriented architectures (SOA), which are currently prevalent in the academic literature. Indeed, the essence of SOA can be described as a new way of building high-level designs. Unfortunately, at this moment there are very few guidelines or laws on how this should be done, which is a major shortcoming. Normalized Systems principles are a contribution towards solving this problem.

Finally, the objective of the Normalized Systems research is to achieve straight-through processing: the direct propagation of a change at the organizational level straight through to the architectural and implementation levels. Normalized Systems theory integrates earlier research by Herwig Mannaert on software architectures and their implementation with Jan Verelst's research on the evolvability of conceptual models and design models of information systems.

Systems Development Methodologies

In their foundational book, the authors give an overview of major Information Systems constructs and methodologies that have been proposed in theory and practice. Based on this overview, they identify issues in four domains.

Limited Traceability

Current methodologies do not realize traceability between the real world, the modules in the design, and the programming code, even though such traceability would help with tracking changes, testing, and business/IT alignment. First of all, the mapping is highly complex for non-trivial systems. Moreover, the levels between which these mappings occur are of a different nature. Furthermore, current methodologies neither prescribe accurately how to perform this mapping nor provide post hoc traceability. This makes it difficult for an agile company to align its information systems with its ever-changing business context.

Limited Adoption of Methodologies

Some researchers indicate that the adoption of methodologies providing guidance in building modular structures is actually rather limited. For example, Huisman and Iivari write that "many organizations claim that they do not use any systems development methods",[6] and Riemenschneider et al. write that "only about half of all organizations actually follow a methodology".[7] Hence, it is difficult to have a clear view of where we stand on the adoption of methodologies. Nevertheless, the authors observe several indications that methodologies are not adopted as widely as academics and researchers had hoped in the past, and attribute this to the perceived gap between theory and practice. Methodologies are therefore often applied ad hoc, not explicitly but implicitly: information systems are built based on heuristics, experience and insight, in terms of the available and known patterns, constructs, techniques, tools and notations.

Vagueness of Design Knowledge

Since the early 1970s, a number of design principles have been proposed, such as information hiding and the classification of coupling and cohesion in structured design. However, when working with them it becomes apparent that there are different opinions about what makes a good design. For instance, the concept of "low coupling" can be approached in slightly different ways, and Parnas's concept of information hiding still needs to be refined. Across the evolution of paradigms there has certainly been significant progress, but a stable theoretical framework is still lacking. Furthermore, the guidance offered is often insufficient for wide adoption by practitioners. In this sense, it is understandable that Philippe Kruchten claims: "We haven't found the fundamental laws in software like in other engineering disciplines".[8]

Lack of Systematic Application of Design Knowledge

In some cases, design knowledge does exist that is almost generally accepted. However, even where patterns, principles or theory do provide concrete guidance, that guidance is not used all the time, leading to a limited, unsystematic application of "good" design. At the technical level, it is very difficult and expensive to build in evolvability towards many anticipated changes, as this necessitates a very fine-grained modular structure that limits the impact of every anticipated change to a single module. At the management level, on the other hand, there are project management constraints such as time and budget. It is unlikely that every individual developer will feel that vaguely defined goals such as evolvability and reuse warrant the extra effort of fine-grained modular structures when short-term deadlines are looming and it is very uncertain whether an anticipated change will ever actually occur at all.

Stability and Normalized Systems

Systems theoretic stability

The fundamental concept and starting point of Normalized Systems theory is systems theoretic stability, meaning that a bounded input function results in bounded output values, even over an infinite time. Applying the systems theoretic stability concept to software demands that a bounded set of changes results in a bounded amount of impacts on the system, even over an infinite time.
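
In classical systems theory this is the bounded-input bounded-output (BIBO) notion of stability. A minimal formulation of the analogy (the symbols u, y, A and B are illustrative and not part of the theory's own notation):

```latex
\exists\, B < \infty :\quad |u(t)| \le A \;\; \forall t
\;\Longrightarrow\; |y(t)| \le B \;\; \forall t
```

In the software analogy, the "input" is a bounded set of anticipated changes and the "output" is the number of impacts those changes cause on the system.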

Assumption of unlimited system evolution

An unlimited time period and an unlimited evolution of the system are considered. This means that the system grows ever larger, in the sense that the number of primitives, and the number of dependencies between them, becomes unbounded over an infinite time. This assumption is called the assumption of unlimited systems evolution. The concept of stability demands that the amount of impacts caused by a change cannot be related to the size of the system, and therefore remains constant over time as the system grows. In other words, stability demands that the impact of a change depends only on the nature of the change itself. Conversely, the authors call changes whose impact depends not only on the nature of the change but also on the size of the system combinatorial effects. Combinatorial effects should be eliminated from the system in order to attain stability. Indeed, no change propagation effects should be present within an information system: a specific change to an information system should require the same effort, irrespective of the information system's size or the point in time at which it is applied.
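
A minimal, hypothetical Java sketch of such a combinatorial effect (class and method names are invented for illustration, not taken from the theory's reference implementations): data passed as raw fields exposes the data's shape in every consumer's signature, so the impact of one change grows with the number of consumers, i.e. with the size of the system.

```java
// Hypothetical illustration of a combinatorial effect: the data's shape
// leaks into every consumer's method signature.
class ReportTask {
    void run(double amount) { /* reporting logic */ }
}

class BillingTask {
    void run(double amount) { /* billing logic */ }
}

// Introducing a currency next to the amount changes every such signature
// and every call site. With N consumers, one change has N impacts: the
// impact grows with the size of the system, which is what stability forbids.
```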

Normalized Systems

The fundamental postulate upon which Normalized Systems theory is based is the belief that information systems need to be stable with regard to a defined set of anticipated changes. Hence, normalized systems can be defined as information systems that are stable with respect to a defined set of anticipated changes, which requires that a bounded set of those changes results in a bounded amount of impacts on system primitives.

Normalized Design Theorems

A number of design theorems or principles are used for the development of normalized systems, i.e. systems that are stable with respect to a defined set of anticipated changes, circumventing most combinatorial effects. Combinatorial effects are (hidden) couplings or dependencies that increase with the size of the system. They are due to the way tasks, action entities and data entities are combined or integrated. Since current software constructs allow combinatorial effects, any developer is able to violate any principle at any time. Hence, combinatorial effects are omnipresent during development and ever increasing during maintenance. Normalized Systems principles identify combinatorial effects at seemingly orthogonal levels. Finally, it is noteworthy that these principles are independent of specific programming languages, modeling languages and software packages.

Separation of Concerns

This theorem expresses the need for the separation of all tasks, in order to obtain, in more general terms, Separation of Concerns. It allows for the isolation of the impact of each change driver. Essentially, the principle describes the required transition of submodular tasks, as identified by the designer, into actions at the modular level. This idea, later called design for change, was already described by Parnas in 1972.[9] Applying the principle prescribes that each module can contain only one submodular task (defined as a change driver), and also that workflow should be separated from functional submodular tasks. Manifestations of this principle include multi-tier architectures, external workflow systems, the separation of cross-cutting concerns, and the use of an enterprise messaging, service or integration bus.
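
A minimal Java sketch of the idea, with illustrative names (this is not the theory's reference implementation): each class hosts exactly one change driver, and a cross-cutting concern such as logging lives in its own construct rather than inside the functional task.

```java
// One submodular task per construct (illustrative names).
interface Task {
    void execute(Object context);
}

// Functional concern only: no logging, no transactions, no workflow.
final class CalculateDiscount implements Task {
    @Override
    public void execute(Object context) {
        // pricing logic only
    }
}

// Cross-cutting concern in its own construct, wrapped around any task.
final class LoggedTask implements Task {
    private final Task inner;

    LoggedTask(Task inner) { this.inner = inner; }

    @Override
    public void execute(Object context) {
        System.out.println("start: " + inner.getClass().getSimpleName());
        inner.execute(context);
        System.out.println("done:  " + inner.getClass().getSimpleName());
    }
}
```

Changing the logging policy then touches only LoggedTask, and changing the pricing logic touches only CalculateDiscount.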

Data Version Transparency

Data Version Transparency implies that data should be communicated between components in version transparent ways. This requires that the data can be changed (e.g., by the mere addition of a field that is not yet used) without impacting the components and their interfaces. The theorem expresses the need for the encapsulation of data entities, in order to wrap the various versions of the data entity and to obtain Data Version Transparency.
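
A brief, hypothetical Java sketch (the names are illustrative): consumers depend only on a stable interface, so a new version of the data entity can add a field without any impact on existing callers.

```java
// Consumers program against the interface, never a concrete version.
interface Invoice {
    double getAmount();
}

final class InvoiceV1 implements Invoice {
    private final double amount;
    InvoiceV1(double amount) { this.amount = amount; }
    @Override public double getAmount() { return amount; }
}

// Version 2 merely adds a field; every existing caller of getAmount()
// compiles and runs unchanged.
final class InvoiceV2 implements Invoice {
    private final double amount;
    private final String currency;   // the newly added, not yet widely used field
    InvoiceV2(double amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }
    @Override public double getAmount() { return amount; }
    public String getCurrency() { return currency; }
}
```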

Action Version Transparency

Action Version Transparency implies that a component can be upgraded without impacting the calling components. In other words, the mere addition of a new version of a component's task should not affect the components calling the action entity containing that task. The theorem expresses the need for the encapsulation of action entities, in order to wrap the various action entity and task versions and to obtain Action Version Transparency. This principle can be supported in nearly any technology environment, for example by polymorphism or a facade pattern.
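
As a hedged illustration of the facade variant in Java (all names are invented for this sketch): callers bind to a stable facade, while the concrete version behind it can be replaced without touching them.

```java
// Callers depend only on this interface.
interface PaymentAction {
    void perform(double amount);
}

final class PaymentActionV1 implements PaymentAction {
    @Override public void perform(double amount) { /* old behavior */ }
}

final class PaymentActionV2 implements PaymentAction {
    @Override public void perform(double amount) { /* new behavior */ }
}

// Facade: upgrading from V1 to V2 is a one-line change here,
// invisible to every calling component.
final class PaymentActionFacade implements PaymentAction {
    private final PaymentAction current = new PaymentActionV2();
    @Override public void perform(double amount) { current.perform(amount); }
}
```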

Separation of States

Separation of States implies that actions or steps in a workflow should be separated from each other in time by keeping state after every action or step. This suggests an asynchronous and stateful way of calling components. Synchronous calls, which result in the pipelines of objects calling other objects that are typical of object-oriented development, cause combinatorial effects. Therefore, the theorem expresses the need for the definition of action states, in order to isolate atomic tasks and to obtain Separation of States.
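
A minimal Java sketch of stateful step execution (illustrative names, not the theory's reference implementation): no step calls the next step directly; each reacts to a recorded state and records a new one.

```java
// State is kept after every step; steps never call each other directly.
enum OrderState { CREATED, VALIDATED, SHIPPED, FAILED }

class Order {
    OrderState state = OrderState.CREATED;
}

class ValidateStep {
    void run(Order order) {
        if (order.state != OrderState.CREATED) return; // react only to the expected state
        boolean ok = true;                             // stand-in for real validation logic
        order.state = ok ? OrderState.VALIDATED : OrderState.FAILED;
        // The recorded state, not a direct call, lets the next step or an
        // error handler pick this order up later, asynchronously.
    }
}
```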

Normalized Systems Elements

Normalized Systems design principles show that software constructs, such as functions and classes, by themselves offer no mechanisms to accommodate anticipated changes in a stable manner. The Normalized Systems approach therefore proposes to encapsulate software constructs in a set of five higher-level software elements, which are considered the building blocks of a flexible software architecture. These elements are modular structures that adhere to the design theorems in order to provide the required stability with respect to anticipated changes. Furthermore, the patterns form a constructive proof that Normalized Systems, containing the common basic functionality of enterprise systems, can actually be built in practice. These design patterns describe the internal structure of primitives. Primitives are the encapsulations of software entities, also called elements, which form the structure and core functionality of an advanced model of an information system, independent of any specific technology environment. The advanced model can be expressed in the following primitives:

Data Element

A data element represents an encapsulated data construct with get- and set-methods that provide access to its information in a data version transparent way. So-called cross-cutting concerns, for instance access control and persistence, should be added to the element in separate constructs.

Action Element

An action element contains a core action representing one and only one functional task. Since action elements perform an operation on data elements, they receive their input and produce their output in terms of data elements; arguments and parameters therefore need to be encapsulated as separate data elements. While action elements are built around a single specific or functional task, all elements may contain supporting or non-functional tasks such as logging, which are called cross-cutting concerns. These should again be added as separate constructs. Normalized Systems theory distinguishes between four implementations of an action element: standard actions, manual actions, bridge actions and external actions. In a standard action, the actual task is programmed in the action element and performed by the information system itself. In a manual action, a human act is required to fulfill the task; after completing the task, the user sets the state of the life cycle data element through a user interface. A process step can also require more complex behavior: a single task in a workflow may need to take care of other aspects that are not the concern of that particular flow. Bridge actions create these other data elements, which then go through their own designated flow. Fourth, when an existing, external application is already in use to perform the actions on, for instance, the different parts of an assembly, the action element is implemented as an external action. These actions call other information systems and set their end state depending on the external system's reported answer.
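
A hedged Java sketch of the four variants (all names are illustrative; a real Normalized Systems expansion generates considerably more structure around each element):

```java
// Minimal life cycle data element carrying the flow's state (illustrative).
final class LifeCycleData {
    String state = "BEGIN";
}

interface ActionElement {
    void execute(LifeCycleData data);
}

// Standard action: the task runs inside this information system.
final class StandardAction implements ActionElement {
    @Override public void execute(LifeCycleData d) { d.state = "DONE"; }
}

// Manual action: a person performs the task and later sets the state via a UI.
final class ManualAction implements ActionElement {
    @Override public void execute(LifeCycleData d) { /* wait for human input */ }
}

// Bridge action: creates another data element so it can go through its own flow.
final class BridgeAction implements ActionElement {
    @Override public void execute(LifeCycleData d) {
        LifeCycleData other = new LifeCycleData(); // enters its designated flow
        other.state = "BEGIN";
        d.state = "BRIDGED";
    }
}

// External action: delegates to an existing external system and maps its answer.
final class ExternalAction implements ActionElement {
    @Override public void execute(LifeCycleData d) {
        boolean answer = true; // stand-in for the external system's reported answer
        d.state = answer ? "DONE" : "FAILED";
    }
}
```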

Workflow Element

Based upon the first and fourth theorems, workflow has to be separated from the other action elements. These action elements must be isolated by intermediate states, and information systems have to react to states. A workflow element contains the sequence in which a number of action elements should be executed in order to fulfill a flow. A consequence of the stateful workflow elements is that state is required for every instance of use of an action element, and that this state therefore needs to be linked to, or be part of, the instance of the data element serving as argument. This data element is called the life cycle data element of the flow.
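
Building on the illustrative ActionElement and LifeCycleData types from the previous sketch, a workflow element might be approximated as follows (a hedged sketch, not the theory's reference implementation): it only orders the steps, while all state lives on the life cycle data element.

```java
import java.util.List;

// Orders the steps of a flow; each step reads and writes the state
// recorded on the life cycle data element (see the sketch above).
final class WorkflowElement {
    private final List<ActionElement> steps;

    WorkflowElement(List<ActionElement> steps) { this.steps = steps; }

    void advance(LifeCycleData data) {
        for (ActionElement step : steps) {
            step.execute(data); // each step itself checks the state it reacts to
        }
    }
}
```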

Connector Element

This element ensures that external systems can interact with data elements without allowing an action element to be called in a stateless way.

Trigger Element

A trigger element controls the states (both regular and error states) and checks whether an action element has to be triggered.
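
Continuing with the same illustrative types, a trigger element can be approximated as a periodic poller over recorded states (again a hedged sketch under the same assumptions):

```java
import java.util.List;

// Scans life cycle data elements and fires the action that reacts
// to the observed state, including error states.
final class TriggerElement {
    void poll(List<LifeCycleData> instances, ActionElement next) {
        for (LifeCycleData d : instances) {
            if ("BEGIN".equals(d.state) || "FAILED".equals(d.state)) {
                next.execute(d); // fire regular work or error handling
            }
        }
    }
}
```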

Set of Anticipated Changes

Evolvability is operationalized as a number of anticipated changes that occur to software systems during their life-cycle. In terms of an advanced information systems model, Normalized Systems theory defines the following set of anticipated changes:

  • An additional data field.
  • An additional data element.
  • An additional action element, which may imply:
    • an action element having a specific data element as input, or producing it as output.
  • An additional version of the functional task of an action element, or of any supporting task, which may imply:
    • The use of an additional external technology.
    • The mandatory upgrade of the version.
  • An additional action in a workflow element.
  • An additional connector element.
  • An additional trigger element.

The anticipated changes guarantee that new versions can be added for all primitives, and therefore seem complete in this sense. Furthermore, the information system needs to be stable under the conditions specified by the assumption of unlimited systems evolution. This implies that the numbers of all elements and their dependencies become unbounded, including:

  • The number of action elements receiving a specific data element as input, and the number producing it as output.
  • The number of workflow elements calling an action element.
  • The number of versions of a specific functional task.
  • The number of versions of a specific supporting task.

Proposed solution

  • Each element encapsulates a Java class in 8 to 10 other classes dealing with cross-cutting concerns, in order to accommodate the anticipated changes without combinatorial effects and to separate the element fully from all other elements.
  • Every element is described by a "detailed design pattern" and every element builds on other elements.
  • Every design pattern is executable, and can be expanded automatically.
  • A Normalized Systems application then consists of n instances of these elements.

Characteristics

  • The proposed elements offer ex ante proven evolvable modularity with respect to a defined set of anticipated changes in packages, frameworks, programming languages et cetera. As a result, a bounded input function yields bounded output values, enabling an unlimited yet controlled evolution of information systems.
  • The evolvable modularity is realized through an extremely fine-grained modular structure, following the rigorous and systematic application of Normalized Systems principles. However, this is not the same as an advanced version of code generation.
  • The systematic elimination of combinatorial effects, using fine-grained modular structures such as elements, while controlling their inherent complexity, leads to determinism. Since all applications have a similar fine-grained software architecture, this paves the way to production lines or product factories for business processes, impact analysis, correctness, reliability and performance, traceable execution et cetera.
  • Since the inside of an instantiation of the element is "known", it is a true black box and therefore does not require further inspection by the user. As such, one can safely regard components as black boxes and reuse them as building blocks that fit together.[10]
  • The design patterns are detailed, unambiguous, and parametrized. Therefore both unit and integration testing of such a stable building block should become trivial. The complete and unambiguous documentation of the building block should consist of the documentation of this design pattern and the expansion parameters.

Ongoing Research

Ongoing design research deals with extending the Normalized Systems approach to the related fields of Enterprise Architecture (EA) and Business Process Management (BPM), because designing an enterprise requires viewing it in its overall context. By incorporating determinism in the construction of an organization's artifacts, this could increase traceability from the organizational level down to the information systems.[11] Other research focuses on the organizational level and tries to combine Enterprise Ontology with Normalized Systems.[12][13] More specifically, it explores the expression of the transaction pattern, a core Enterprise Ontology construct, in Normalized Systems elements.[14] Finally, there is ongoing research into what Normalized Systems means in terms of enterprise and management, and into its implications with respect to competences.


References

  1. ^ Mannaert, Herwig; Verelst, Jan (2009). Normalized Systems: Re-creating Information Technology Based on Laws for Software Evolvability. Koppa. ISBN 978-90-77160-008. http://www.koppa.be/?page_id=11. Retrieved October 27, 2011. 
  2. ^ Lehman, Meir M. (September 1980), "Programs, Life Cycles, and Laws of Software Evolution", Proceedings of the IEEE, 68 (9), p. 1068.
  3. ^ Eick, Stephen G.; Graves, Todd L.; Marron, J. S.; Mockus, Audris (January 2001). "Does Code Decay? Assessing the Evidence from Change Management Data". IEEE Transactions on Software Engineering (Piscataway, NJ, USA: IEEE Press) 27 (1): 1–12. ISSN 0098-5589. http://portal.acm.org/citation.cfm?id=359558. 
  4. ^ Organized Robbery
  5. ^ McIlroy, M. Douglas (1968), "Mass produced software components", NATO Software Eng. Conf., Garmisch, Germany, pp. 138–155, http://www.cs.dartmouth.edu/~doug/components.txt 
  6. ^ Huisman, Magda; Iivari, Juhani (2002). "The individual deployment of systems development methodologies". In Banks Pidduck, A.; et al. (eds.). Advanced Information Systems Engineering. Lecture Notes in Computer Science. 2348. Springer Berlin / Heidelberg. pp. 134–150. doi:10.1007/3-540-47961-9_12. ISBN 978-3-540-43738-3. http://www.springerlink.com/content/nyht7k5mm17qhapt/. Retrieved July 20, 2010.
  7. ^ Riemenschneider, Cynthia K.; Hardgrave, Bill C.; Davis, Fred D. (December 2002). "Explaining Software Developer Acceptance of Methodologies: A Comparison of Five Theoretical Models". IEEE Transactions on Software Engineering (IEEE Press) 28 (12): 1135–1145. ISSN 0098-5589. http://portal.acm.org/citation.cfm?id=631304. Retrieved July 20, 2010.
  8. ^ Kruchten, Philippe (March/April 2005). "Editor's Introduction: Software Design in a Postmodern Era". IEEE Software (IEEE Press) 22 (2): 16–18. doi:10.1109/MS.2005.38. http://www.computer.org/portal/web/csdl/doi/10.1109/MS.2005.38. Retrieved July 20, 2010.
  9. ^ Parnas, D.L. (December 1972). "On the criteria to be used in decomposing systems into modules". Communications of the ACM (New York, NY, USA: ACM) 15 (12): 1053–1058. ISSN 0001-0782. http://portal.acm.org/citation.cfm?id=361623. 
  10. ^ McIlroy, M. Douglas (October 7–11, 1968). "Mass produced software components". In Naur, P.; Randell, B. (eds.). NATO Conference on Software Engineering. Garmisch, Germany: NATO Scientific Affairs Division. pp. 138–155. http://www.cs.dartmouth.edu/~doug/components.txt. Retrieved July 20, 2010.
  11. ^ Van Nuffel, Dieter; Huysmans, Philip; Bellens, David; Ven, Kris (June 4–5, 2010). "Towards Deterministically Constructing Organizations Based on the Normalized Systems Approach". In Winter, R.; Zhao, J.L.; Aier, S.. Lecture Notes in Computer Science: Global Perspectives on Design Science Research, Proceedings of the 5th International Conference on Design Science Research in Information Systems and Technology (DESRIST 2010). 6105. St. Gallen, Switzerland: Springer-Verlag Berlin / Heidelberg. pp. 242–257. doi:10.1007/978-3-642-13335-0_17. 
  12. ^ Krouwel, Marien R. (2010). "Towards the agile enterprise: A method to come from a DEMO model to a Normalized System, applied to Government Subsidy Schemes" (MSc-thesis). Delft University of Technology. http://repository.tudelft.nl/view/ir/uuid%3Aa170e23f-9fee-45fd-b99b-3a85e4d551cf/. 
  13. ^ Krouwel, Marien R.; Op 't Land, Martin (May 16-17, 2011). "Combining DEMO and Normalized Systems for Developing Agile Enterprise Information Systems". In Albani, Antonia; Dietz, Jan L.G.; Verelst, Jan. Lecture Notes in Business Information Processing: Advances in Enterprise Engineering V, Proceedings of the First Enterprise Engineering Working Conference (EEWC 2011). 79. Antwerp, Belgium: Springer Berlin Heidelberg. pp. 31-45. doi:10.1007/978-3-642-21058-7_3. ISBN 978-3-642-21058-7. http://dx.doi.org/10.1007/978-3-642-21058-7_3. Retrieved October 27, 2011. 
  14. ^ Huysmans, Philip; Bellens, David; Van Nuffel, Dieter; Ven, Kris (2010), "Aligning the constructs of enterprise ontology and normalized systems", in Albani, Antonia; Dietz, Jan L. G., Lecture Notes in Business Information Processing: 6th International Workshop, CIAO! 2010 (DESRIST 2010), 49, St. Gallen, Switzerland: Springer, pp. 1–15, ISBN 978-3-642-13047-2, http://www.springer.com/business+%26+management/business+information+systems/book/978-3-642-13047-2
