THE PRACTICALITY OF ENGINEERING PRINCIPLES IN SOFTWARE ENGINEERING


Abstract:-
In the early days of computing, software development was all about automating complex scientific calculations requiring several iterations. With advances in technology resulting in a drastic reduction in the cost of computers, computer programs came to be designed to automate enterprise activities that required storing large volumes of data and writing complex code to process them. Unlike the structured scientific problems that computers solved earlier, the application of computers to automating enterprise activities involved problems that were far less structured, and the users of enterprise applications were not as technically skilled as the scientists. This increased the complexity of software development projects to a level that necessitated the adoption of principles long applied in the traditional engineering fields to solve complex problems such as building skyscrapers and highways or electrically wiring a large building. Previous studies have documented and categorized the engineering principles that apply to software engineering. This paper goes a step further to illustrate the practical application of these principles to the software development process using design methodologies. In this way, the study also closes the knowledge gap as to why the field is named software engineering.

Introduction:-
Software development started as a process whereby computers were programmed to automate complex scientific calculations that required several iterations, iterations that were beyond the reach of manual methods. The user community then was made up of science-oriented professionals who could easily cope with the complexities of the computer programs that had to be written. With advances in technology resulting in a drastic reduction in the cost of computers, computer programs came to be designed to automate enterprise activities that required storing large volumes of data and implementing complex algorithms to process them. In a university, for example, an enterprise application can be deployed to automate both academic and administrative activities. Unlike the structured problems that scientists used computers to solve, the application of computers to automating enterprise activities involved problems that were far less structured, and this increased the complexity of software development projects.

Review of Related Works:
The field of Software Engineering (SE) has evolved since the 1960s and has therefore amassed a substantial amount of knowledge (Pícha and Brada, 2014). This study focuses on the practical application of engineering principles and is certainly not the first attempt in the research community to assess the state and context of Engineering Principles (EP) in Software Engineering. Fitzgerald and Stol (2015) attempted to provide a roadmap of the overall territory, an important step in its own right, since there is much confusion as terms are used interchangeably and synonymously without rigorous definition, similar to early research on agile methods. Their study identified a set of continuous activities, tagged together as the "Continuous Star", which they presented as part of an overall roadmap for continuous software engineering. Delivering the continuous star agenda, however, highlighted many significant challenges which need to be addressed if the concept is to succeed, and the study failed to provide a clear view of the terms in software engineering principles. Pícha and Brada (2014) summarized the goals and history of the SE field and focused on the empirical research area within it. Their research highlighted the most frequent problems affecting empirical SE research efforts and the most promising suggested solutions, both collected from a carefully selected sample of research publications; however, the study did not cover the full breadth of the studied material and findings. Abran et al. (2004) presented a survey and an analysis of the work carried out, beginning from the early 70s, on the search for fundamental principles of software engineering, in terms of both the methodology used and the status of results to date. The paper also illustrated that there is a clear lack of consensus about which of the proposed principles are indeed fundamental; however, it failed to establish the foundation of SE as intended.
Krishna et al. (2012) explored the set of fundamental principles, and this contributed to a better understanding, possibly because the discussion was from an engineering perspective; they reported on a series of efforts undertaken to identify a set of fundamental principles of software engineering. Brhel et al. (2015) aimed at capturing and analyzing the current state of the art in User-Centered Agile Software Development (UCASD). More specifically, the paper investigated the following research question: which principles constitute a user-centred agile software development approach? Following an approach similar to the agile manifesto, the study derived a set of grounded principles for UCASD from the literature. Valerdi and Boehm (2010) provided an insight into the development of COSYSMO, its parameter definitions, and an example estimate. Their efforts confirmed that systems engineers and program managers desire models that can be used to develop accurate estimates of systems engineering effort; considering its contribution, COSYSMO represents such a model. Its parametric formulation differs from others currently available, which are heuristic-based, and being based on actual project data enables engineers and managers to use COSYSMO to justify forecasted expenditures and reduce risk. The model has had tremendous support in the research area but still faces some significant challenges, and continued collaboration with systems engineering practitioners is expected to produce a more accurate and flexible model. Séguin et al. (2010) presented a survey and an analysis of the literature on software engineering principles. The literature survey, covering thirty years, identified 14 different papers and books which together have proposed a total of 313 distinct principles for software engineering.
A further analysis of these works was carried out based on a rigorous definition of the term 'principle' and on the identification of a set of explicit criteria to assess whether or not any of the proposed principles qualify as fundamental principles of software engineering. The analytical approach employed filtered out a set of 24 candidate software engineering principles based on these criteria. However, the approach rested on a rigorous definition of terms, and the lack of practical illustration to support the findings has been the major setback of the work. Thus, this paper addresses that gap, using a combination of pseudocodes, use cases, structure charts, and database design methodologies to illustrate more practically some of the 24 candidate software engineering principles screened by Séguin et al. (2010).

Methodology:-
Several authors have come up with their own lists of software engineering principles. There is, therefore, no attempt in this work to identify additional principles; instead, this work filters and synthesizes, practically, certain principles emanating from Séguin et al. (2010), whose work came up with 24 principles screened out of 313 based on documented criteria. The practicality of these principles as regards their usage in producing quality and cost-effective software is demonstrated using software design methodologies, notably flowcharts, pseudocodes, data flow diagrams, entity-relationship diagrams and use case diagrams, for illustrative purposes.

Flow Chart:
The first design using a flowchart dates back to 1945 and John von Neumann. Unlike an algorithm, a flowchart uses different symbols to design a solution to a problem. It is a commonly used programming tool. Looking at a flowchart, it is easy to discern the operations performed in a system and their sequence. A flowchart is often considered a blueprint of a design used for solving a specific problem (Parmar, 2018). By definition, a flowchart is a pictorial representation depicting the flow of steps in a program, people in an organization, or pages in a presentation. According to Shakil et al. (2013), a flowchart is a simple mapping tool that shows the sequence of actions within a process in a form that is easy to read and communicate. Flowcharts have become a significant method of explaining different kinds of information using different types of figures. In some documents, flowcharts are used to illustrate details, thus making such documents easier to understand, and certain complex problems can hardly be solved without first explaining them using flowcharts. The most significant role flowcharts play is in the design of projects: as a design tool, flowcharts help to divide a problem into smaller parts which are easier to manage (Arrish, 2014). In summary:
• Flowcharts are an excellent way of communicating the logic of a program and are easy and efficient to use in analysing problems.
• Flowcharts play the role of a blueprint during program development, making the development process easier.
• Flowcharts make program or system maintenance easier, just as it is easy to convert their logic to code in any programming language.
Parmar (2018) observed that to draw a flowchart, the following standard symbols are used:

Pseudocodes:
Writing pseudocodes before the executable source code eases the software development process, helps to fix certain kinds of errors before any source code is written, and produces easily maintainable documentation. Studies show that pseudocodes are superior to visual approaches for routine coding in most programming environments. Pseudocodes, in the context of Doersing (2015), consist of precise, natural-language descriptions of specific actions. From the perspective of algorithms, pseudocodes are understood as a way of writing down programs at a relatively low level without adhering to the specific rules of any programming language. According to Oda et al. (2015), pseudocodes are important sources of information for developing and analysing various applications, and they are used by academicians, researchers, developers, scientists and innovators from various technology domains. Pseudocodes are an effective method, containing a finite set of instructions that produces the desired output within finite time and space complexity. In real programming environments, pseudocodes corresponding to source codes rarely exist, because pseudocodes are not necessary once a programmer has a good grasp of the programming language and project (Oda et al., 2015). For this reason, good pseudocode avoids code-like statements: it stays at a higher level than source code to increase efficiency during the design phase and to avoid being restricted by specific limitations or features of the target programming language. This allows the programmer to capture the intent of an action inside a routine. Pseudocode should describe what a design is doing, not how exactly it is going to be implemented (Doersing, 2015). However, the level of the pseudocode should be low enough to enable the programmer to refine it into source code very quickly, almost automatically.
Given below is an example of pseudocode:

Keep track of the current number of resources in use
If another resource is available
    Allocate a dialogue box structure
    If a dialogue box structure could be allocated
        Note that one more resource is in use
        Initialize the resource
        Store the resource number at the location provided by the caller
    Endif
Endif
Return true if a new resource was created; else return false

Data Flow Diagrams:
Data Flow Diagrams (DFDs) were developed in the 1970s as a method of modelling data flow when developing information systems, and are often used in the preliminary design stages to provide an overview of the system. Today there are several advanced modelling tools, including UML, which was developed by Grady Booch, Ivar Jacobson and Jim Rumbaugh at Rational Software in the 1990s, and other tools that not only describe the data flow but also specify the processing steps involved. These tools in some cases facilitate code development, as viewed by Coleman (2013). Ibrahim and Yen (2011) stated that DFDs are used because they are an approach for specifying, constructing and visualizing the model of a system graphically, and they are in practical use on a very wide basis. They, however, lack the formal and precise specification that would have enabled them to be used to check the correctness and consistency among diagrams. Aleryani (2016) asserts that the most important diagram used in a structured approach to design is the Data Flow Diagram; other diagrams such as structure charts, state machines and ER diagrams are less useful, and DFDs have certain advantages over all of them. The hierarchical structure of DFDs provides different abstraction levels, which is very useful in system design. As a result, Aleryani (2016) concluded that DFDs are a powerful tool used throughout the system analysis and design process, and can be included in the object-oriented approach. Figure 2 depicts a DFD for a Guest Book system.

ER-diagrams:
An Entity-Relationship (ER) data model is a high-level conceptual model that describes information as entities, attributes, and relationships. Entity-relationship models are used for database design and have played a central role in systems specification, analysis and development. Above all, ER models are used to control and monitor a system's databases. In ER modelling, a system's data is modelled as a set of entities, each composed of a set of attributes, together with their relationships. Btoush and Hammad (2015) stressed that entities, attributes, and relationships are the basic elements of ER models. An entity is an object that exists in the real world and is distinguishable from other objects; an entity type is a collection of similar entities with its attributes. Nouns in the system's requirements can therefore be identified as entities, while an entity type's attributes describe the detailed structure of the entity's data and can be derived from adjectives and adverbs. A relationship is an association among two or more entities. According to Gaikwad et al. (2017), the Entity-Relationship Diagram (ERD) is the first step in database design and hence an important step for database designers, users, analysts, researchers and managers in software engineering. The ERD shows that the real world consists of a collection of entities, the relationships between them, and the attributes that describe them. An entity is an object about which data is stored; a relationship defines the connections allowed between instances of entities; and an attribute is a characteristic common to all or most instances of a particular entity. Since the ER approach is easy to understand, a designer can focus on the conceptual modelling of an organization, deciding which entity sets, relationship sets and constraints to use. The ER-diagram tool provides a mechanism for quickly and easily modelling the data structures required by a software system.
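To make these ER concepts concrete, the sketch below models two entity types, their attributes, and a relationship between them in Python. The Student, Course and Enrolment names are purely illustrative and are not drawn from any case study in this paper:

```python
from dataclasses import dataclass

@dataclass
class Student:          # entity type, identified from a noun in the requirements
    student_id: int     # key attribute
    name: str           # descriptive attribute

@dataclass
class Course:           # a second entity type
    code: str
    title: str

@dataclass
class Enrolment:        # relationship between Student and Course
    student: Student
    course: Course
    grade: str = ""     # an attribute belonging to the relationship itself

# Build a tiny instance of the model.
alice = Student(1, "Alice")
se101 = Course("SE101", "Software Engineering")
record = Enrolment(alice, se101, "A")
print(record.student.name, record.course.code, record.grade)  # → Alice SE101 A
```

The same separation of entities, attributes and relationships is what an ERD expresses graphically before any such code is written.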

Use cases:
Use cases are used for documenting system requirements in terms of a system's behaviour. They may also be used for communication between the various participants in a software project, namely the system's developers, its future users and its owners. Klimek and Szwed (2010) believe that use cases are relatively easy to understand, even for people not familiar with information technology jargon. Use cases enable understanding of the system without tying it to implementation details. They can also describe the system requirements, and formal verification of these requirements at the initial phase of system modelling reduces production costs throughout the whole software development cycle. Sabharwal et al. (2017) showed that the functional requirements of the system under consideration are depicted through the use case diagram, which comprises actors, use cases and their relationships, as shown in Figure 3.
Six (6) of the 24 engineering principles are subsequently examined critically using the design methodologies discussed earlier. The practicality of each selected engineering principle is illustrated using typical software development scenarios.

Strive to have a peer, not a customer, find a defect:
Project deliverables undergo requirements specification, design modelling, coding, test planning, system documentation, and the production of user manuals. Despite these activities, errors still occur, and it is a common experience to hear customers complain that bugs in programs wasted their time, gave them wrong results, or otherwise aggravated them. It is therefore important that products are formally inspected by peers to identify faults (defects) before they cause the failures that customers would otherwise discover. A defect in software results from some type of mistake. Usually, these mistakes are the result of human error, but sometimes they are caused by systemic errors in the development process. Fortunately, not every mistake leads to a defect, but almost all defects can be traced back to some type of mistake, as can be seen in Table 1. For example:
• Developer forgets error checking in the file copy code → a corrupted file is copied, and the software crashes.
• Developer does not understand the customer scenario → the software does not meet the customer's needs.
• Developer only tests the software using a fast computer and the performance seems fine → the software responds too slowly from the perspective of a user with an older and slower computer.
There are several testing techniques that can be deployed during peer review to enable peers, rather than customers, to detect errors in code. The use of an output statement to echo the values of variables at critical sections of a program is one such technique that has proved very useful during peer review. Data output this way conveys technical information that would constitute a nuisance to customers but can be used by peers to ascertain the correctness of programs. The bubble sort algorithm is used to illustrate this concept. Assume a list is an array of n elements, and assume further that a swap function swaps the values of two given array elements when they are in the wrong order. Then the following pseudocode applies:

begin BubbleSort(list)
    for all elements of list
        if list[i] > list[i+1]
            swap(list[i], list[i+1])
        end if
    end for
    return list
end BubbleSort
The behaviour of this algorithm is such that comparison takes place even when a pair of elements is already in the right order. This unnecessarily raises the time complexity of the algorithm to O(n²) even in the best-case scenario. In the case of sorting a large database, the resulting query time may be intolerable to customers. To trap this phenomenon in the pseudocode, code can be inserted by a peer review team to establish the instances where swapping, and hence comparison, is unnecessary. Based on physical observation, the problem can be eased by the use of a flag variable that lets a peer see whether any swap operation has occurred. If no swap has occurred in an iteration, then the array requires no more processing to be sorted and the code breaks out of the loop. Based on this peer review process, the pseudocode of the bubble sort algorithm can then be re-written to include such a flag. The phenomenon trapped by peer review here is not necessarily an error, but it hurts the customer in terms of turnaround time.
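A minimal Python sketch of the flag-based refinement described above (the function and variable names are illustrative, not taken from this paper):

```python
def bubble_sort(items):
    """Bubble sort with a swap flag: stop as soon as a full pass makes no swap."""
    n = len(items)
    for _ in range(n - 1):
        swapped = False                      # flag inspected after each pass
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                      # list already sorted: break out early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

On an already-sorted input the flag stays False after the first pass, so only one pass of n-1 comparisons is made, which is exactly the best-case saving the peer review process aims for.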

Grow systems incrementally:
This engineering principle suggests that a complex task should be broken down into smaller, less complex tasks that can be achieved by a small team. The objective of this principle is to identify parts which can be developed from specification to executable code independently. For software development projects, incremental development means dividing the requirements into suitable parts during specification, allowing the different increments to be developed independently, unlike monolithic code written without any form of modularisation. The design and coding of one increment is followed by the testing of that increment, making it possible for developers to start implementing the next increment while peers validate, verify or certify the one already developed. The incremental approach implements modularity and allows a good deal of parallelism between development and testing. The benefit of this parallelism is not only the possibility of working in parallel, but also that testers start testing the software to be delivered at an early stage. In Figures 4a and 4b, each design illustrates in a practical way how a product can be handled incrementally during its development process.

Define software artefacts rigorously:
An artefact, in software development, is work that has been documented and stored in a repository so that it can be retrieved on demand. An artefact is one of many kinds of tangible by-products of the software development process. Artefacts such as use cases, class diagrams and other Unified Modeling Language (UML) models, requirements and design documents help describe the function, architecture, and design of software. Other artefacts are concerned with the process of development itself, such as project plans, business cases, and risk assessments. The term artefact in connection with software development is largely associated with specific development methods or processes, for example the Unified Process.
The following is a concrete example depicting a basic scenario that practically demonstrates aspects of the engineering principle of defining artefacts rigorously. The example is demonstrated using a class diagram. It rigorously indicates that one of the two related classes (the subclass) is considered a specialized form of the other (the superclass), and the superclass is considered a generalization of the subclass. In practice, this means that any instance of the subclass is also an instance of the superclass. An exemplary tree of generalizations of this form is found in biological classification: humans are a subclass of simian, which is a subclass of mammal, and so on.
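The generalization tree just described can be sketched in Python, where subclassing plays the role of the class diagram's generalization arrow; the class names simply follow the biological example above:

```python
class Mammal:                      # superclass: the generalization
    def feeds_young_with_milk(self):
        return True

class Simian(Mammal):              # subclass: every Simian is also a Mammal
    def has_opposable_thumbs(self):
        return True

class Human(Simian):               # further specialization of Simian
    def __init__(self, name):
        self.name = name

h = Human("Ada")
# Any instance of the subclass is also an instance of every superclass.
print(isinstance(h, Simian), isinstance(h, Mammal))  # → True True
```

The `isinstance` checks make the rigorous meaning of the generalization relationship executable: the substitutability the class diagram asserts can be verified directly.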

To improve design, study previous solutions to similar problems:
Often, the purpose of designing a new system is to replace an existing one. When this is the case, a lot of benefit is derivable from analyzing the existing system, because it represents a previous solution to the same problem. The analysis of previous solutions to the same or similar problems gives a better idea of what problems exist in the current system. The study of a similar product is very useful when what is being done is an upgrade or improvement of an existing product or system; a proper analysis ensures that the right parts of the system are upgraded to address the issues uncovered by the study. This principle is practically illustrated using the bubble sort and insertion sort algorithms, where the study of bubble sort led to insertion sort as an improvement on bubble sort's shortcomings. Bubble sort takes several passes to sort the elements of an array; every pass performs comparisons between elements and exchanges the data if the elements are not in the right order. Both algorithms are specified in the following pseudocodes.

Algorithm for Bubble Sort:
The steps in the bubble sort can be described as:
• Exchange neighbouring items until the largest item reaches the end of the array.
• Repeat the above step for the rest of the array.

The Pseudocode is:
Line 1: Declare an array of integers A, i = 1, …, n. Comment: sort A in descending order of magnitude.
Line 2: for j = 1 to n-1 do line 3
Line 3: for i = 1 to n-1 do line 3-1

Insertion Sort Algorithm:
The main idea of insertion sort is:
• Start by considering the first two elements of the array. If they are found to be out of order, swap them.
• Consider the third element; insert it into its proper position among the first three elements.
• Consider the fourth element; insert it into its proper position among the first four elements, and continue until the array is sorted.
The pseudocode is:

insertionSort(A, n)
    for i ← 2 to n
        j ← i
        while j > 0 and A[j-1] > A[j]
            swap A[j-1] and A[j]
            j ← j - 1

In the best-case scenario, the data is already in the desired order. The bubble sort algorithm is unable to recognize that the data is already sorted and goes ahead to perform its comparisons anyway, resulting in O(n²) time complexity. The insertion sort algorithm recognizes this property of a data set and avoids comparisons where the data is already in the desired order; in this way it achieves a time complexity of O(n) in the best case. This means that insertion sort is an improvement over bubble sort in the best case, since it recognizes when an array is already sorted and avoids unnecessary comparisons. This is a classical demonstration of the engineering principle that design can be improved by studying previous solutions to similar problems.
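As an illustrative sketch (not this paper's own code), the following Python version of insertion sort counts its comparisons, making the best-case behaviour described above visible:

```python
def insertion_sort(data):
    """Insertion sort (ascending); returns the list and the number of comparisons."""
    comparisons = 0
    for i in range(1, len(data)):
        key, j = data[i], i - 1
        while j >= 0:
            comparisons += 1
            if data[j] <= key:            # element already in place: stop scanning
                break
            data[j + 1] = data[j]         # shift the larger element one slot right
            j -= 1
        data[j + 1] = key
    return data, comparisons

# On already-sorted input only n-1 comparisons are made: the O(n) best case.
print(insertion_sort([1, 2, 3, 4, 5]))  # → ([1, 2, 3, 4, 5], 4)
```

Running the same counter inside a naive bubble sort would report (n-1)² comparisons for the same sorted input, which is precisely the improvement the principle highlights.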

Build with and for reuse:
The concept of reusability is widely used to reduce the cost, effort, and time of software development. Reusability also increases the productivity, maintainability, portability, and reliability of software products, because the reused parts have already been evaluated in other software products. The problem faced in software engineering, however, is not a lack of reuse, but a lack of widespread, systematic reuse; therefore, strong attention must be given to this principle. Software reusability can be considered from two viewpoints: design-by-reuse and design-for-reuse. Design-by-reuse is the use of an existing application or its components to build a new application. Design-for-reuse is the practice of building applications all or part of which can be used in all or part of other applications (Al-Badareen et al., 2011). Figures 6 and 7 show the Data Flow Diagrams (DFDs) of a framework for the build-by-reuse and build-for-reuse processes respectively.
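A minimal Python sketch of the two viewpoints: a component written once for reuse (design-for-reuse) is then consumed unchanged by two different applications (design-by-reuse). All names here are illustrative, and the validation rule is deliberately simplistic:

```python
# Reusable component: written once, with no application-specific logic inside.
def validate_email(address: str) -> bool:
    """Tiny illustrative check; a real system would use a proper validator."""
    return "@" in address and "." in address.split("@")[-1]

# Application 1 reuses the component in a registration workflow.
def register_user(email):
    return "registered" if validate_email(email) else "rejected"

# Application 2 reuses the very same component, unchanged, for list cleaning.
def clean_mailing_list(addresses):
    return [a for a in addresses if validate_email(a)]

print(register_user("ada@example.org"))                        # → registered
print(clean_mailing_list(["ada@example.org", "bad-address"]))  # → ['ada@example.org']
```

Because `validate_email` carries no assumptions about either application, it can migrate into a shared library, which is the systematic reuse the principle calls for.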

Use better and fewer people:
This engineering principle of using better and fewer people is anchored on an effective staff performance appraisal that separates genuine performers from tricksters. The principle has the advantage of shrinking the growing army of hardworking and dedicated staff in organizations who are disgruntled by the unfairness of appraisal systems that accommodate tricksters who contribute virtually nothing to productivity yet are adept at influencing remunerations and promotions to their advantage (Long et al., 2013). An automated staff performance appraisal helps in combating this phenomenon by identifying staff with the following tendencies:
• Unauthorized absence from work and truancy on flimsy excuses;
• Engagement in extraneous activities completely unrelated to their job descriptions while at work;
• Unauthorized leave, and proceeding on leave without a substitute relief staff;
• Lateness to, and laziness at, work;
• Insubordination, which renders the inherent control mechanisms in the hierarchical organizational structure ineffective, among others.
An effective appraisal system exposes weak staff while identifying staff who are good at their jobs. The weak ones, whose productivity is questionable, can be relieved of their appointments. In this way, the workforce is reduced to the barest minimum without sacrificing effectiveness, leaving the few staff who are good and self-motivated to put their best into the job, thus realizing the engineering principle of using better and fewer people.
A simple design of an appraisal system that achieves this engineering principle is demonstrated using a use case diagram in Figure 8. The actors in the use case are supervisors, who give periodic targets to their subordinate staff. The staff respond by accepting the targets and subsequently making entries for work done during the appraisal period. Supervisors view the work done and score each target. The scores are aggregated and computed as a percentage score for the month.
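The aggregation step at the end of that workflow can be sketched in a few lines of Python. The maximum score per target is an assumed parameter for illustration, not something specified in this paper:

```python
def appraisal_score(target_scores, max_per_target=10):
    """Aggregate per-target scores into a percentage for the appraisal period.

    max_per_target is an assumed scale (each target scored out of 10 here).
    """
    if not target_scores:
        return 0.0
    total = sum(target_scores)
    maximum = max_per_target * len(target_scores)
    return round(100 * total / maximum, 1)

# A supervisor scored four targets, each out of 10.
print(appraisal_score([8, 9, 6, 10]))  # → 82.5
```

Monthly percentages computed this way give the objective record needed to distinguish the performers the principle asks an organization to retain.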

Conclusion and Suggestion for Further Work:-
This paper has explored the practicality of certain engineering principles as applicable to software engineering. To achieve this, a systematic review of the existing literature in software engineering was conducted. Our findings are that Séguin et al. (2010) exposed 24 engineering principles screened out of 313 based on their literature review and screening criteria. Our study adopted pseudocodes, data flow diagrams, ER diagrams, flowcharts and use case diagrams as methods for the practical illustration of the six (6) engineering principles selected from those 24 software engineering principles. Despite our efforts and success in this study, there is a need to continue to collaborate with software and systems engineering practitioners to identify areas where the rest of the engineering principles can be practically illustrated. It is the desire of the authors to continue reporting the status and progress of their research efforts in this direction, with the intent of contributing to knowledge in the field of software engineering.