Sunday, June 17, 2018



Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.

Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:

  • meets the requirements that guided its design and development,
  • responds correctly to all kinds of inputs,
  • performs its functions within an acceptable time,
  • is sufficiently usable,
  • can be installed and run in its intended environments, and
  • achieves the general result its stakeholders desire.

Because the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). The job of testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones.

Software testing can provide objective and independent information about software quality and risk of failure to users or sponsors.

Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an agile approach, requirements, programming, and testing are often done concurrently.





Overview

Although testing can determine the correctness of software under the assumption of some specific hypotheses (see the hierarchy of testing difficulty below), testing cannot identify all the defects within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles - principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing often includes the examination of code as well as the execution of that code in various environments and conditions, as well as examining the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.

Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing assists in making this assessment.

Faults and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, for example, unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software. A single defect may result in a wide range of failure symptoms.

Combination of inputs and preconditions

A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) - usability, scalability, performance, compatibility, reliability - can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Software developers can not test everything, but they can use a combinatorial testing design to identify the minimum number of tests needed to get the coverage they want. Combinatorial testing design allows users to obtain greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use a combination of combinatorial design methods to build structured variations into their test cases.
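For illustration, here is a minimal sketch (not a production tool such as PICT or allpairspy, and with a made-up configuration space) of greedy all-pairs selection:

```python
from itertools import combinations, product

# Hypothetical configuration space: 3 x 3 x 2 = 18 exhaustive combinations.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["linux", "windows", "macos"],
    "locale": ["en", "de"],
}
names = list(params)

def pairs_of(case):
    """Set of (parameter, value) pairs covered by one test case."""
    return set(combinations(zip(names, case), 2))

# Every value pair that must appear in at least one selected test.
uncovered = set()
for case in product(*params.values()):
    uncovered |= pairs_of(case)

# Greedy set cover: repeatedly pick the case covering the most new pairs.
suite = []
while uncovered:
    best = max(product(*params.values()),
               key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: 18 cases, pairwise: {len(suite)} cases")
for case in suite:
    print(dict(zip(names, case)))
```

On this toy space the greedy cover shrinks the 18 exhaustive combinations to roughly nine or ten cases while still exercising every pair of parameter values at least once.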

Economics

A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.

Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.

Roles

Software testing can be done by dedicated software testers. Until the 1980s, the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established, such as test manager, test leader, test analyst, test designer, tester, automation developer, and test administrator. Software testing can also be performed by non-dedicated software testers.




History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.



Testing methods

Static vs. dynamic testing

There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is executed. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code and is applied to discrete functions or modules. Typical techniques for these are either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing also involves validation. Together they help improve software quality. Among the techniques for static analysis, mutation testing can be used to ensure the test cases will detect errors that are introduced by mutating the source code.
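As a concrete illustration of that last point, the sketch below hand-rolls what a mutation testing tool automates: apply a small syntactic change (a mutant) to the code under test and check that the existing tests fail, i.e. "kill" the mutant. The function, mutant, and suite are hypothetical; real tools such as mutmut for Python generate and execute many such mutants mechanically.

```python
def max_of(a, b):            # original code under test
    return a if a > b else b

def max_of_mutant(a, b):     # mutant: relational operator '>' flipped to '<'
    return a if a < b else b

def suite_passes(fn):
    """The project's (tiny) unit-test suite, parameterized over the target."""
    return fn(2, 1) == 2 and fn(1, 2) == 2

assert suite_passes(max_of)             # suite passes on the original code
assert not suite_passes(max_of_mutant)  # suite kills the mutant: tests are sensitive
```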

The box approach

Software testing methods have traditionally been divided into white and black box testing. Both of these approaches are used to describe the viewpoint taken by the tester when designing a test case.

White box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

The techniques used in white-box testing include:

  • API testing - testing of the application using public and private APIs (application programming interfaces)
  • Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
  • Fault injection methods - intentionally introducing faults to gauge the efficacy of testing strategies
  • Mutation testing methods
  • Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:

  • Function coverage, which reports on functions executed
  • Statement coverage, which reports on the number of lines executed to complete the test
  • Decision coverage, which reports on whether both the True and the False branch of a given test has been executed

100% statement coverage ensures that all code paths or branches (in terms of control flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.
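A short sketch, with a hypothetical function, of why 100% statement coverage is weaker than decision coverage:

```python
def apply_discount(price, is_member):
    if is_member:
        price *= 0.5          # half-price member discount
    return price

# One test executes every statement -> 100% statement coverage ...
assert apply_discount(100, True) == 50.0

# ... yet the False branch of the decision was never taken. Decision
# coverage requires a second test, which exercises the untouched path:
assert apply_discount(100, False) == 100
```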

Black box testing

Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.
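As a hedged illustration of two of these methods, equivalence partitioning and boundary value analysis, consider a hypothetical validate_age function specified to accept ages 18 through 65 inclusive:

```python
def validate_age(age: int) -> bool:
    """Hypothetical spec: accept ages 18..65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert validate_age(10) is False    # below-range partition
assert validate_age(40) is True     # valid partition
assert validate_age(80) is False    # above-range partition

# Boundary value analysis: values at and adjacent to each boundary,
# where off-by-one defects tend to cluster.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert validate_age(age) is expected
```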

Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.

This method of test can be applied to all levels of software testing: unit, integration, system, and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process - capturing everything that occurs on the test system in video format. The output video is supplemented by real-time tester input via a picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence of a test failure that he or she requires and can instead focus on the cause of the fault and how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software since agile methods require greater communication between testers and developers and collaboration within small teams.

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, since they require less preparation time to be applied, while important bugs can be found quickly. In ad-hoc testing, where testing takes place in an improvised and spontaneous way, the ability of the test tool to visually record everything that happens on the system becomes very important to document the steps taken to uncover the bug.

Visual testing is gaining recognition in customer acceptance and usability testing, because the tests can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture of software failure for the developers.

Grey-box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.

However, tests that require modifying a back-end data repository such as a database or a log file do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.



Testing levels

Broadly speaking, there are four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing. Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit, integration, and system testing, distinguished by the test target without implying a specific process model. Other test levels are classified by the testing objective.

There are two different levels of tests from a customer perspective: low-level testing (LLT) and high-level testing (HLT). LLT is a group of tests for different level components of a software or product application. HLT is a group of tests for the whole application or software product.

Unit testing

Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function works as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software; rather it is used to ensure that the building blocks of the software work independently from each other.
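A minimal sketch of such a developer-written unit test, using Python's standard unittest module; the slugify function and its expected behavior are hypothetical:

```python
import unittest

def slugify(title: str) -> str:
    """Turn a page title into a URL slug (the function under test)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_corner_case_extra_whitespace(self):
        # corner case: leading/trailing and repeated whitespace
        self.assertEqual(slugify("  Hello   World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```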

Unit testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development life cycle. Unit testing aims to eliminate construction errors before code is promoted to additional testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.

Depending on the organization's expectations for software development, unit testing may include static code analysis, data flow analysis, metric analysis, peer code review, code coverage analysis, and other software testing practices.

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

Component interface testing

The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.

System testing

System testing tests a completely integrated system to verify that the system meets its requirements. For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.

Operational acceptance testing

Operational acceptance is used to conduct operational readiness (pre-release) of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations Readiness and Assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.

In addition, software testing should ensure that the portability of the system, as well as working as expected, does not also destroy or partially corrupt its operating environment or cause other processes within that environment to become inoperative.



Testing types, techniques and tactics

Different labels and ways of grouping testing may be testing types, software testing tactics or techniques.

Installation testing

Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing.

Compatibility testing

A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Smoke and sanity testing

Sanity testing determines whether it is reasonable to proceed with further testing.

Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as build verification tests.

Regression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
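One common concrete form of this is a test pinned to a previously fixed bug, so the suite fails loudly if the defect ever returns; the function and issue number below are hypothetical:

```python
def parse_price(text: str) -> float:
    # Fix for hypothetical issue #1234: crashed on thousands separators.
    return float(text.replace(",", ""))

def test_regression_issue_1234():
    # Re-run the exact input that exposed the original defect.
    assert parse_price("1,299.99") == 1299.99

test_regression_issue_1234()
```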

Acceptance testing

Acceptance testing can mean one of two things:

  1. A smoke test is used as a build acceptance test prior to further testing, e.g., before integration or regression.
  2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.

Beta testing

Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team, known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Continuous testing

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.

Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
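A toy sketch of fuzzing as destructive testing: feed random byte strings to a hypothetical parser and require that it either returns a value or raises its documented ValueError; any other exception fails the run.

```python
import random

def parse_header(data: bytes) -> int:
    """Hypothetical parser: documented to raise ValueError on bad input."""
    if len(data) < 4:
        raise ValueError("header too short")
    return int.from_bytes(data[:4], "big")

random.seed(0)  # make the fuzz run reproducible
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_header(blob)
    except ValueError:
        pass  # documented failure mode -- acceptable
    # Any other exception propagates and fails this destructive test.
```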

Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability, and resource usage.

Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period.
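A minimal load-test sketch along these lines: drive a stand-in request handler from many threads and assert that the run stays within an assumed time budget. A real load test would target a deployed system with a dedicated tool (e.g., JMeter or Locust); the figures here are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.001)  # stand-in for real request-handling work

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    list(pool.map(handle_request, range(5000)))
elapsed = time.perf_counter() - start

print(f"5000 requests in {elapsed:.2f}s -> {5000 / elapsed:.0f} req/s")
assert elapsed < 10, "load test exceeded its assumed time budget"
```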

There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably.

Real-time software systems have strict timing constraints. To test whether timing constraints are met, real-time testing is used.

Usability testing

Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Accessibility Testing

Accessibility testing may include compliance with standards such as:

  • Americans with Disabilities Act of 1990
  • Section 508 Amendment to the 1973 Rehabilitation Act
  • The Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Security testing

Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers.

The International Organization for Standardization (ISO) defines this as the "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorised persons or systems are not denied access to them."

Internationalization and localization

Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product.
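A tiny pseudolocalization sketch: wrapping and accenting every UI string makes hard-coded (untranslated) text and layout overflow stand out during testing. The transformation rules below are illustrative, not a standard.

```python
# Replace vowels with accented look-alikes and pad the string, so that any
# text still shown un-transformed in the UI is hard-coded (untranslated).
ACCENTS = str.maketrans("aeiou", "àéîöü")

def pseudolocalize(s: str) -> str:
    return "[!! " + s.translate(ACCENTS) + " !!]"

print(pseudolocalize("Save changes"))  # -> [!! Sàvé chàngés !!]
```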

Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).

Actual translation to human languages must be tested, too. Possible localization and globalization failures include:

  • Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
  • Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent.
  • Literal word-for-word translations may sound inappropriate, artificial, or too technical in the target language.
  • Untranslated messages in the original language may be left hard coded in the source code.
  • Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading, or confusing.
  • Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
  • Software may lack support for the character encoding of the target language.
  • Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable, if the font is too small.
  • A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
  • Software may lack proper support for reading or writing bi-directional text.
  • Software may display images with text that was not localized.
  • Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.

Development testing

Development testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development life cycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.

Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.

A/B testing

A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome.
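One common way to analyze such an experiment is a two-proportion z-test comparing conversion counts from the control and the treatment; the sketch below uses made-up counts and is only one possible methodology.

```python
from math import sqrt
from statistics import NormalDist

conv_a, n_a = 120, 2400   # control: 120/2400 = 5.0% conversion
conv_b, n_b = 156, 2400   # treatment: 156/2400 = 6.5% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided test

print(f"z = {z:.2f}, p = {p_value:.4f}")          # p < 0.05 for these counts
```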

Concurrent testing

In concurrent testing, the focus is on the performance while continuously running with normal input and under normal operational conditions, as opposed to stress testing, or fuzz testing. Memory leaks, as well as basic faults, are easier to find with this method.

Conformance testing or type testing

In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for example, are extensively tested to determine whether they meet the recognized standard for that language.



Testing process

The traditional waterfall development model

A common practice in waterfall development is that testing is performed by an independent group of testers. This can happen:

  • after the functionality is developed, but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
  • at the same moment the development project starts, as a continuous process until the project finishes.

However, even in the waterfall development model, unit testing is often done by the software development team even when further testing is done by a separate team.

Agile or XP development model

In contrast, some emerging software disciplines such as extreme programming and the agile software development movement adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). The tests are expected to fail initially. Each failing test is followed by writing just enough code to make it pass. This means the test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
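A sketch of one turn of that write-fail-pass loop, with a hypothetical fizzbuzz requirement:

```python
# Step 1 (red): the test is written first; running the suite before
# fizzbuzz exists fails with a NameError.
def test_fizzbuzz_of_15():
    assert fizzbuzz(15) == "FizzBuzz"

# Step 2 (green): write just enough code to make the test pass, then
# refactor freely under the protection of the test.
def fizzbuzz(n: int) -> str:
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)

test_fizzbuzz_of_15()
```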

The ultimate goal of this testing process is to support continuous integration and to reduce defect rates.

This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.



Sample test cycle

Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.

  • Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
  • Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team. This part could be complex when running tests with a lack of programming knowledge.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Or defect analysis, is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
  • Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team.
  • Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly.
  • Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing

Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks in which to write tests, and continuous integration software will run tests automatically every time code is checked into a version control system.

While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debugging tools include features such as:

  • Program monitors, permitting full or partial monitoring of program code, including:
    • Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
    • Hypervisors, permitting complete control over the execution of program code
    • Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
    • Code coverage reports
  • Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
  • Automated functional Graphical User Interface (GUI) testing tools, used to repeat system-level tests through the GUI
  • Benchmarks, allowing run-time performance comparisons to be made
  • Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage

Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE).



Measurement in software testing

Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.

There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.

Hierarchy of testing difficulty

Based on the number of test cases required to construct a complete test suite in each context (i.e. a test suite such that, if it is applied to the implementation under test, then we collect enough information to precisely determine whether the system is correct or incorrect according to some specification), a hierarchy of testing difficulty has been proposed. It includes the following testability classes:

  • Class I: there exists a finite complete test suite.
  • Class II: any partial distinguishing rate (i.e., any incomplete capability to distinguish correct systems from incorrect systems) can be reached with a finite test suite.
  • Class III: there exists a countable complete test suite.
  • Class IV: there exists a complete test suite.
  • Class V: all cases.

It has been proved that each class is strictly included in the next. For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines where transitions are triggered if inputs are produced within some real-bounded interval only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). The inclusion into Class I does not require the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.



Testing artifacts

The software testing process can produce several artifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organizational needs.

Test plan
A test plan is a document detailing the objectives, target market, internal beta team, and processes for a specific beta test. The developers are well aware of what test plans will be executed and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, to select test cases for execution when planning for regression tests by considering requirement coverage.
Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result. This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. (A sketch of such a test-case record appears after this list.)
Test script
A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client and with the product or a project.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
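As referenced under Test case above, here is a sketch of one test case captured with those fields as a simple record that could live in a spreadsheet or test-management database; all field values are illustrative.

```python
test_case = {
    "id": "TC-042",                           # unique identifier
    "requirement": "SRS-3.1 (user login)",    # reference to the specification
    "preconditions": ["user 'alice' exists", "account is not locked"],
    "steps": ["open login page",
              "enter alice / correct password",
              "submit the form"],
    "input": {"username": "alice"},
    "expected_result": "dashboard page is shown",
    "actual_result": None,                    # filled in during execution
    "automated": True,                        # checkbox-style flag
}
```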



Certification

Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. Note that a few practitioners argue that the testing field is not ready for certification, as mentioned in the Controversy section.

Testing certifications
Certified Associate in Software Testing (CAST) offered by the International Software Certifications Board
Certified Manager of Software Testing (CMST) offered by the International Software Certifications Board
Certified Software Tester (CSTE) offered by the International Software Certifications Board
Certified Agile Tester (CAT) offered by the International Software Quality Institute (iSQI)
ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualifications Board
ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualifications Board
Quality assurance certifications
Certified Associate in Software Quality (CASQ) offered by the International Software Certifications Board
Certified Manager of Software Quality (CMSQ) offered by the International Software Certifications Board
Certified Software Quality Analyst (CSQA) offered by the International Software Certifications Board
Certified Software Quality Engineer (CSQE) offered by the American Society for Quality (ASQ)



Controversy

Some of the major software testing controversies include:

Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006, mainly in commercial circles, whereas government and military software providers use this methodology but also the traditional test-last models (e.g., in the Waterfall model).
Manual vs. automated testing
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Test automation can then be considered a way to capture and implement requirements. As a general rule, the larger the system and the greater the complexity, the greater the return on investment in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization.
Is the existence of the ISO 29119 software testing standard justified?
Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn.
Studies used to show the relative expense of fixing defects
There are opposing views on the applicability of studies used to show the relative expense of fixing defects depending on their introduction and detection. For example:

It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. For example, if a problem in the requirements is found only post-release, then it would cost 10-100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.

The data from which these claims are extrapolated is scant. Laurent Bossavit says in his analysis:

The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.

Some practitioners declare that the testing field is not ready for certification
No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester.



Related processes

Software verification and validation

Software testing is used in association with verification and validation:

  • Verification: Have we built the software right? (i.e., does it implement the requirements).
  • Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

And, according to the ISO 9000 standard:

Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.

In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specifications, Detailed Design Specifications, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).

But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is successfully verified when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: the Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.

So, when these words are defined in common terms, the apparent contradiction disappears.

Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them try it.

Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving formal and technical input documents.

Software quality assurance (SQA)

Role testing

Source of the article: Wikipedia
