Saturday 1 March 2014

Overview

After you complete the design and code review of the application block, you need to test it to make sure that it meets the functional requirements and successfully implements the functionality for the usage scenarios it was designed for.
The testing effort can be divided into two categories that complement each other:
  • Black box testing. This approach tests all possible combinations of end-user actions. Black box testing assumes no knowledge of code and is intended to simulate the end-user experience. You can use sample applications to integrate and test the application block for black box testing. You can begin planning for black box testing immediately after the requirements and the functional specifications are available.
  • White box testing. (This is also known as glass box, clear box, and open box testing.) In white box testing, you create test cases by looking at the code to detect any potential failure scenarios. You determine the suitable input data for testing various APIs and the special code paths that need to be tested by analyzing the source code for the application block. Therefore, the test plans need to be updated before starting white box testing and only after a stable build of the code is available.
    A failure of a white box test may result in a change that requires all black box testing to be repeated and white box testing paths to be reviewed and possibly changed.
The goals of testing can be summarized as follows:
  • Verify that the application block is able to meet all requirements in accordance with the functional specifications document.
  • Make sure that the application block has consistent and expected output for all usage scenarios for both valid and invalid inputs. For example, make sure the error messages are meaningful and help the user in diagnosing the actual problem.
You may need to develop one or more of the following to test the functionality of the application blocks:
  • Test harnesses, such as NUnit test cases, to test the API of the application block for various inputs
  • Prototype Windows Forms and Web Forms applications that integrate the application blocks and are deployed in simulated target deployments
  • Automated scripts that test the API of the application blocks for various inputs
This chapter examines the process of black box testing and white box testing. It includes code examples and sample test cases to demonstrate the approach to black box testing and white box testing of application blocks. For the purposes of the examples in this chapter, it is assumed that functionality testing is being done for the Configuration Management Application Block (CMAB). The CMAB has already been through design and code review. The requirements for the CMAB are the following (a brief test-harness sketch follows the list):
  • It provides the functionality to read and store configuration information transparently in a persistent storage medium. The supported storage media are SQL Server, the registry, and XML files.
  • It provides a configurable option to store the information either in encrypted form or in plain text, using XML notation.
  • It can be used with desktop applications and Web applications that are deployed in a Web farm.
  • It caches configuration information in memory to reduce cross-process communication, such as reading from any persistent medium. This reduces the response time of the request for any configuration information. The expiration and scavenging mechanism for the data that is cached in memory is similar to the cron algorithm in UNIX.
  • It can store and return data from various locales and cultures without any loss of data integrity.
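To ground the examples in the rest of this chapter, the following is a minimal sketch of an NUnit test harness that exercises the block through its public API. The Microsoft.ApplicationBlocks.ConfigurationManagement namespace, the ConfigurationManager.Read and ConfigurationManager.Write signatures, and the applicationSettings section name are assumptions made for illustration; the actual CMAB types, method names, and configuration sections may differ.

    using System.Collections;
    using NUnit.Framework;
    // Assumed CMAB namespace and API, for illustration only; the real
    // block's type and method names may differ.
    using Microsoft.ApplicationBlocks.ConfigurationManagement;

    [TestFixture]
    public class CmabReadWriteTests
    {
        // Hypothetical section name, assumed to be mapped in App.config to
        // one of the supported storage media (XML file, registry, or SQL Server).
        private const string SectionName = "applicationSettings";

        [Test]
        public void WriteThenReadReturnsStoredValue()
        {
            Hashtable settings = new Hashtable();
            settings["connectionTimeout"] = "30";

            // Persist the section and read it back through the block. The
            // storage medium and the encryption option are selected in the
            // configuration file, so this code is the same for every medium.
            ConfigurationManager.Write(SectionName, settings);
            Hashtable result = (Hashtable)ConfigurationManager.Read(SectionName);

            Assert.AreEqual("30", result["connectionTimeout"]);
        }
    }

Because the calling code does not change when the storage medium changes, the same harness can be run once for each configured medium to cover the first requirement in the list above.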

Black Box Testing

Black box testing assumes the code to be a black box that responds to input stimuli. The testing focuses on the output to various types of stimuli in the targeted deployment environments. It focuses on validation tests, boundary conditions, destructive testing, reproducibility tests, performance tests, globalization, and security-related testing.
Perform risk analysis to estimate the amount and level of testing required; it provides the criteria for deciding when to stop testing, and it prioritizes the test cases by weighing the impact of potential errors against their probability of occurrence. By concentrating on the test cases that cover high-impact, high-probability errors, you can reduce the testing effort while still ensuring that the application block is good enough to be used by various applications.
Preferably, black box testing should be conducted in a test environment that is close to the target environment. There can be one or more deployment scenarios for the application block being tested, and its requirements and behavior can vary from one scenario to another. Testing the application block in a simulated environment that closely resembles each deployment scenario ensures that it satisfies the requirements of the targeted real-life conditions and minimizes surprises in the production environment. The test cases that are executed ensure the robustness of the application block for the targeted deployment scenarios.
For example, the CMAB can be deployed on the desktop with Windows Forms applications or in a Web farm when integrated with Web applications. The CMAB requirements, such as performance objectives, vary from the desktop environment to the Web environment. The test cases and the test environment have to vary according to the target environments. Other application blocks may have more restricted and specialized target environments. An example of an application block that requires a specialized test environment is an application block that is deployed on mobile devices and is used for synchronizing data with a central server.
As mentioned earlier, you will need to develop custom test harnesses for functionality testing purposes.

Input

The following input is required for black box testing:
  • Requirements
  • Functional specifications
  • High-level design documents
  • Application block source code
The black box testing process for an application block is shown in Figure 6.1.
Figure 6.1. Black box testing process

Black Box Testing Steps

Black box testing involves testing external interfaces to ensure that the code meets functional and nonfunctional requirements. The various steps involved in black box testing are the following:
  1. Create test plans. Create prioritized test plans for black box testing.
  2. Test the external interfaces. Test the external interfaces for various types of input using automated test suites, such as NUnit suites and custom prototype applications.
  3. Perform load testing. Load test the application block to analyze its behavior at various load levels and to ensure that it meets all performance objectives that are stated as requirements (a minimal load driver sketch follows this list).
  4. Perform stress testing. Stress test the application block to analyze various bottlenecks and to identify any issues visible only under extreme load conditions, such as race conditions and contentions.
  5. Perform security testing. Test for possible threats in deployment scenarios. Deploy the application block in a simulated target environment and try to hack the application by exploiting any possible weakness of the application block.
  6. Perform globalization testing. Execute test cases to ensure that the application block can be integrated with applications targeted toward locales other than the default locale used for development.
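As a rough illustration of step 3, the sketch below drives concurrent reads against the ConfigurationManager.Read call assumed in the earlier sketch and reports throughput. The thread count, iteration count, and section name are arbitrary values chosen for the example; a real load test would normally be driven by a dedicated load-generation tool and its results compared against the stated performance objectives.

    using System;
    using System.Diagnostics;
    using System.Threading;
    // Assumed CMAB namespace and API; see the earlier sketch.
    using Microsoft.ApplicationBlocks.ConfigurationManagement;

    public class CmabLoadDriver
    {
        private const int ThreadCount = 20;      // simulated concurrent clients
        private const int ReadsPerThread = 500;  // requests issued by each client

        public static void Main()
        {
            Stopwatch watch = Stopwatch.StartNew();
            Thread[] threads = new Thread[ThreadCount];

            for (int i = 0; i < ThreadCount; i++)
            {
                threads[i] = new Thread(ReadLoop);
                threads[i].Start();
            }
            foreach (Thread thread in threads)
            {
                thread.Join();
            }

            watch.Stop();
            int totalReads = ThreadCount * ReadsPerThread;
            Console.WriteLine("{0} reads in {1:F0} ms ({2:F0} reads/sec)",
                totalReads, watch.Elapsed.TotalMilliseconds,
                totalReads / watch.Elapsed.TotalSeconds);
        }

        private static void ReadLoop()
        {
            for (int i = 0; i < ReadsPerThread; i++)
            {
                // Repeated reads should normally be served from the block's
                // in-memory cache rather than from the persistent medium.
                ConfigurationManager.Read("applicationSettings");
            }
        }
    }

Increasing ThreadCount and comparing the reported throughput and elapsed time against the performance objectives gives an early indication of how the block behaves at different load levels.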
The next sections describe each of these steps.

Step 1: Create Test Plans

The first step in the process of black box testing is to create prioritized test plans. You can prepare the test cases for black box testing even before you implement the application block. The test cases are based on the requirements and the functional specification documents.
The requirements and functional specification documents help you extract various usage scenarios and the expected output in each scenario.
The detailed test plan document includes test cases for the following:
  • Testing the external interfaces with various types of input
  • Load testing and stress testing
  • Security testing
  • Globalization testing
For more information about creating test cases, see Chapter 3, "Testing Process for Application Blocks."

Step 2: Test the External Interfaces

You need to test the external interfaces of the application block using the following strategies:
  • Ensure that the application block exposes interfaces that address all functional specifications and requirements. To perform this validation testing, do the following:
    1. Prepare a checklist of all requirements and features that are expected from the application block.
    2. Create test harnesses, such as NUnit test fixtures, and small "hello world" applications that use all of the exposed APIs of the application block under test.
    3. Run the test harnesses.
    Using NUnit, you can validate that the intended feature works when it is given the expected input.
    The sample applications can indicate whether the application block can be integrated and deployed in the target environment. The sample applications are used to test the possible user actions for the usage scenarios; these include both the expected process flows and random inputs. For example, a Web application deployed in a Web farm that integrates the CMAB can be used to test reading configuration information from, and writing it to, a persistent store such as the registry, SQL Server, or an XML file. You need to test the functionality by using the various configuration options in the configuration file.
  • Test for various types of input. After ensuring that the application block exposes the interfaces that address all of the functional specifications, you need to test the robustness of these interfaces. You need to test for the following input types:
    • Randomly generated input within a specified range
    • Boundary cases for the specified range of input
    • Zero as the input, if the input is numeric
    • The null input
    • Invalid input or input that is out of the expected range
This testing ensures that the application block provides expected output for data within the specified range and gracefully handles all invalid data. Meaningful error messages should be displayed for invalid input. Boundary testing ensures that the highest and lowest permitted inputs produce expected output.
You can use NUnit for this type of input testing; separate sets of NUnit tests can be written for each category of input, as in the sketch that follows. Executing these tests against each new build of the application block verifies that the API continues to process the given input successfully.
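The following sketch extends the earlier harness with tests for the input categories listed above. The retryCount setting, its assumed valid range of 0 through 100, and the expectation that a null section name is rejected with ArgumentNullException are all assumptions made for the example; substitute the actual settings, ranges, and exception types defined for the block under test.

    using System;
    using System.Collections;
    using NUnit.Framework;
    // Assumed CMAB namespace and API; see the earlier sketch.
    using Microsoft.ApplicationBlocks.ConfigurationManagement;

    [TestFixture]
    public class CmabInputTests
    {
        private const string SectionName = "applicationSettings";

        [Test]
        public void BoundaryAndZeroValuesAreStoredAndReturnedUnchanged()
        {
            // Assumed valid range for the hypothetical retryCount setting is
            // 0 through 100, so "0" also covers the zero-input case.
            foreach (string boundary in new string[] { "0", "100" })
            {
                Hashtable settings = new Hashtable();
                settings["retryCount"] = boundary;
                ConfigurationManager.Write(SectionName, settings);

                Hashtable result = (Hashtable)ConfigurationManager.Read(SectionName);
                Assert.AreEqual(boundary, result["retryCount"]);
            }
        }

        [Test]
        public void ReadOfUnknownSectionFailsWithMeaningfulError()
        {
            Exception caught = null;
            try
            {
                ConfigurationManager.Read("noSuchSection");
            }
            catch (Exception ex)
            {
                caught = ex;
            }

            Assert.IsNotNull(caught, "Expected an exception for an unknown section.");
            // The exact exception type is block specific; the test asserts only
            // that the message helps the user diagnose the actual problem.
            Assert.IsTrue(caught.Message.IndexOf("noSuchSection") >= 0,
                "The error message should name the missing section.");
        }

        [Test]
        public void NullSectionNameIsRejected()
        {
            // Assumed behavior: null input is invalid and should be rejected
            // with a clear exception rather than being silently ignored.
            Assert.Throws<ArgumentNullException>(
                delegate { ConfigurationManager.Read(null); });
        }
    }

A similar test that writes an out-of-range value, such as "101", and asserts that a meaningful error is reported would cover the invalid-input case, and randomly generated values within the valid range can be exercised with the same loop pattern as the boundary test.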
