Configurability
Systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.
Configuration management is the practice of handling changes systematically so that a system maintains its integrity over time. Configuration management implements the policies, procedures, techniques, and tools that manage and evaluate proposed changes, track the status of changes, and maintain an inventory of system and support documents as the system changes. Configuration management programs and plans provide technical and administrative direction to the development and implementation of the procedures, functions, services, tools, processes, and resources required to successfully develop and support a complex system.
Deployability
Deployability is the degree of ease with which software can be taken from the development to the production environment.
Deployability is largely a function of the technical environment, module structures, and programming runtimes/languages used in building a system, rather than of the system's actual logic or code. The following are some factors that determine deployability:
Module structures: If the system's code is organized into well-defined modules/projects that compartmentalize it into easily deployable subunits, deployment is much easier.
Development ecosystem support: A mature tool chain for the system's runtime, which allows configuration such as dependencies to be established and satisfied automatically, increases deployability.
Standardized configuration: Keep configuration structures (files, database tables, and others) the same for both developer and production environments.
Standardized infrastructure: Keeping deployments to a homogeneous or standardized set of infrastructure greatly aids deployability.
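The "standardized configuration" factor can be sketched in code. In this illustrative example (all names and keys are hypothetical), both the developer and production environments share one configuration shape; only the values differ, supplied through environment-style variables with development defaults:

```python
def load_config(env: dict) -> dict:
    """Build a config dict whose *shape* is identical in every environment."""
    return {
        "db_url": env.get("APP_DB_URL", "sqlite:///dev.db"),      # dev default
        "log_level": env.get("APP_LOG_LEVEL", "DEBUG"),
        "workers": int(env.get("APP_WORKERS", "1")),
    }

# Developer machine: no variables set, defaults apply.
dev_cfg = load_config({})

# Production: same structure, different values.
prod_cfg = load_config({
    "APP_DB_URL": "postgresql://db.internal/app",
    "APP_LOG_LEVEL": "WARNING",
    "APP_WORKERS": "8",
})
```

Because the two dictionaries always have the same keys, deployment scripts and application code can treat every environment identically.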
Elasticity
The degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible.
Elasticity is a defining characteristic that differentiates cloud computing from previously proposed computing paradigms, such as grid computing. The dynamic adaptation of capacity, e.g., by altering the use of computing resources, to meet a varying workload is called "elastic computing".
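The idea of matching resources to demand at each point in time can be illustrated with a toy scaling rule (all names and the per-worker capacity figure are assumptions for the sketch, not a real autoscaler):

```python
CAPACITY_PER_WORKER = 100  # requests/sec one worker can serve (assumed)

def workers_needed(demand: int) -> int:
    """Smallest worker count whose capacity just covers current demand."""
    return max(1, -(-demand // CAPACITY_PER_WORKER))  # ceiling division

# As the workload varies, capacity is provisioned and de-provisioned to track it.
workloads = [50, 250, 900, 120]
history = [workers_needed(d) for d in workloads]
```

Running the loop over the sample workloads yields a worker count that rises and falls with demand, which is the elastic behavior the definition describes.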
Evolvability
Evolvability is the property of systems that can easily be updated to fulfill new requirements; software that is evolvable will cost less to maintain.
Software evolvability is a multifaceted quality attribute that describes a software system's ability to easily accommodate future changes. It is a fundamental characteristic for the efficient implementation of strategic decisions, and the increasing economic value of software. For long life systems, there is a need to address evolvability explicitly during the entire software lifecycle in order to prolong the productive lifetime of software systems.
Extensibility
Extensibility is a software engineering and systems design principle that provides for future growth.
Extensibility is a measure of the ability to extend a system and the level of effort required to implement the extension. Extensions can be made through the addition of new functionality or through modification of existing functionality. The principle provides for enhancements without impairing existing system functions. An extensible system is one whose internal structure and dataflow are minimally or not affected by new or modified functionality; for example, recompiling or changing the original source code might be unnecessary when changing a system’s behavior, whether by the original creator or by other programmers.
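A minimal sketch of such a design (names are illustrative) is a plugin registry: new behavior is added by registering a handler, and the core dispatch code is never touched, recompiled, or modified:

```python
_handlers = {}

def register(kind):
    """Decorator that plugs a handler into the system for a message kind."""
    def deco(fn):
        _handlers[kind] = fn
        return fn
    return deco

def handle(kind, payload):
    # Core dispatch: unaffected by however many handlers are added later.
    return _handlers[kind](payload)

@register("upper")
def to_upper(payload):
    return payload.upper()

# Later, another programmer extends the system without modifying the above:
@register("reverse")
def reverse(payload):
    return payload[::-1]
```

The internal structure and dataflow of `handle` stay fixed while the set of supported behaviors grows, which is the property the paragraph describes.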
Fault Tolerance
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of one or more of its components.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Within the scope of an individual system, fault tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication.
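The "anticipate exceptional conditions and cope with them" approach can be sketched as retry-then-degrade (all names are hypothetical): a flaky operation is retried a few times, and if it keeps failing the system falls back to a reduced-quality answer rather than failing completely:

```python
def fault_tolerant_call(primary, fallback, retries=3):
    """Try the primary operation, then degrade gracefully on repeated failure."""
    for _ in range(retries):
        try:
            return primary()
        except Exception:
            continue  # anticipated exceptional condition: try again
    return fallback()  # reduced level of operation, not total breakdown

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "full result"

result = fault_tolerant_call(flaky, lambda: "cached result")
```

Here the third attempt succeeds, so the caller gets the full result; had all retries failed, the cached fallback would have kept the system operating at a reduced level.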
Feasibility
For a software project to be feasible, it must be possible to complete the work in the time and/or budget dictated. For this reason feasibility touches on a number of other QARs, including time-to-market, total cost of ownership, technical knowledge, and migration requirements.
There are a number of ways to assess and safeguard the feasibility of your software engineering project, for example using pre-built plugins, off-the-shelf solutions, managed services, and cloud-native functions where appropriate. Ultimately, though, meeting this non-functional requirement depends on close collaboration with your development partner, who can advise you on a suitable architecture to meet your various needs.
Integrability
Process of bringing together the component subsystems into one system and ensuring that the subsystems function together as a system.
The integration process can also be considered an aggregation of subsystems cooperating so that the system is able to deliver the overarching functionality. System integration involves integrating existing, often disparate systems in such a way that focuses on increasing value to the customer while at the same time providing value to the company, e.g., reducing operational costs and improving response time.
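The aggregation of cooperating subsystems can be sketched with a thin integration layer (all class and method names here are hypothetical): two existing, disparate services are combined behind one interface that delivers the overarching functionality:

```python
class InventoryService:
    """Stands in for an existing subsystem."""
    def stock(self, sku):
        return {"widget": 5}.get(sku, 0)

class PricingService:
    """Stands in for a second, disparate subsystem."""
    def price(self, sku):
        return {"widget": 9.99}.get(sku)

class StoreFacade:
    """Integration layer: callers see one system, not two subsystems."""
    def __init__(self, inventory, pricing):
        self.inventory = inventory
        self.pricing = pricing

    def quote(self, sku):
        return {
            "sku": sku,
            "in_stock": self.inventory.stock(sku) > 0,
            "price": self.pricing.price(sku),
        }

store = StoreFacade(InventoryService(), PricingService())
q = store.quote("widget")
```

The facade is where integration value is added: a single call answers a customer question that neither subsystem could answer alone.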
Interoperability
Interoperability is a characteristic of a product or system to work with other products or systems.
It can also be defined as the ability of computer software to communicate with other software and effectively exchange and process information. The purpose of interoperability is to make it so that different systems are able to “talk” to and “understand” the information they pass to one another. Similar to the automation of processes inside organizations, the automation of cross-organizational business processes is an important trend. In this endeavor, collaborating organizations strive for loose coupling of their information systems rather than tight integration: the collaborating information systems should be able to work together but retain as much independence as possible.
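One common way to achieve this loose coupling is to exchange a neutral, standardized representation at system boundaries instead of sharing internal objects. A minimal sketch, using JSON as the interchange format (the function names and message fields are hypothetical):

```python
import json

def export_order(order_id, items):
    """System A's boundary: serialize to a neutral wire format."""
    return json.dumps({"order_id": order_id, "items": items})

def import_order(message):
    """System B's boundary: parse the wire format into its own terms."""
    data = json.loads(message)
    return data["order_id"], data["items"]

wire = export_order(42, ["widget", "gadget"])
order_id, items = import_order(wire)
```

Because the two systems agree only on the message format, either side can change its internal implementation freely — the independence the definition calls for.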
Portability
Portability in high-level computer programming is the usability of the same software in different environments.
The prerequisite for portability is a generalized abstraction between the application logic and system interfaces. When software with the same functionality is produced for several computing platforms, portability is the key issue for reducing development cost. Software portability may involve:
Transferring installed program files to another computer of basically the same architecture.
Reinstalling a program from distribution files on another computer of basically the same architecture.
Building executable programs for different platforms from source code; this is what is usually understood by "porting".
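Abstracting over system interfaces is exactly what libraries such as Python's `pathlib` provide: the same path-building logic works on any platform instead of hard-coding a separator. A small sketch (the directory names are illustrative):

```python
from pathlib import PurePosixPath, PureWindowsPath

def log_file(base, app):
    """Portable path logic: no hard-coded '/' or '\\' separators."""
    return base / app / "app.log"

# The same function serves both platforms; only the base path type differs.
posix = log_file(PurePosixPath("/var/log"), "myapp")
windows = log_file(PureWindowsPath(r"C:\logs"), "myapp")
```

The application logic (`log_file`) never mentions a platform, which is the generalized abstraction the paragraph identifies as the prerequisite for portability.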
Scalability
Scalability is the property of a system to handle a growing amount of work by adding resources to the system.
Scalability means that the system must be able to accommodate larger volumes (whether of users, throughput, or data) over time, and it is closely related to NFRs such as elasticity, the ability to scale up and down quickly as needed. An example is a search engine, which must support increasing numbers of users and of the topics it indexes. Today, scalability can be achieved more easily than in the past thanks to modern cloud-based solutions, which have the infrastructure needed to auto-scale according to requirements.
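"Adding resources to handle growing work" is often realized by sharding: spreading a growing keyspace across more nodes. A toy sketch (the key format and shard counts are assumptions for illustration):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Deterministically assign a key to one of num_shards nodes."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

keys = [f"user-{i}" for i in range(1000)]

# With 2 nodes, all keys land on shards 0-1; scaling out to 8 nodes
# spreads the same workload across 8 shards.
small = {shard_for(k, 2) for k in keys}
large = {shard_for(k, 8) for k in keys}
```

Because the assignment is a pure function of the key, each node only needs to know the shard count, so capacity grows by adding machines rather than by rewriting the system.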
Simplicity
Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better. - Edsger W. Dijkstra (1984)
Unfortunately, there isn’t one simple definition. However, there are a number of concepts that can be applied to drive toward architectural simplicity:
Strive for the simplest option: To attain simplicity, first recognize that there are multiple possible designs, then select the most concise option that satisfies the functional and non-functional needs.
Apply YAGNI: "You Ain't Gonna Need It" is an often-contentious principle that argues for designing and building what is needed now, not what you think you may need in the future.
Practice parsimony: In design, parsimony means using the fewest elements and resources necessary to meet the need.
Avoid premature optimization: Avoid applying too much abstraction to your design at the beginning.
Testability
Degree to which a software artifact (i.e., a software system, software module, or requirements or design document) supports testing in a given test context.
The effort and effectiveness of software tests depend on numerous factors, including:
Properties of the software requirements.
Properties of the software itself (such as size, complexity, and testability).
Properties of the test methods used.
Properties of the development and testing processes.
Qualification and motivation of the persons involved in the test process.
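A common way software itself supports testing is dependency injection: if a function's collaborators are passed in rather than reached for globally, a test context can substitute deterministic stubs. A minimal sketch (names are illustrative):

```python
def greeting(hour_provider):
    """The clock is injected, so the behavior is controllable in tests."""
    hour = hour_provider()
    return "good morning" if hour < 12 else "good afternoon"

# In production one would pass a real clock, e.g.
#   greeting(lambda: datetime.now().hour)
# In a test context, stubs make the outcome deterministic:
morning = greeting(lambda: 9)
afternoon = greeting(lambda: 15)
```

Had `greeting` called the system clock directly, its output would change with the time of day and the test context could not pin down an expected result — a small example of a design property that raises or lowers testability.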
Workflow
The process of dividing software development work into smaller, parallel or sequential steps or subprocesses to improve design and product management.
At its core, a workflow is a sequence of tasks that processes a set of data. Any time data is passed between humans and/or systems, a workflow is created. Workflows are the paths that describe how something goes from undone to done, or from raw to processed.