
Application Block Code Review

This section describes the steps involved in performing a code review for an application block.

Input

The following input is required for a code review:
  • Requirements (use cases, functional specifications, deployment scenarios, and security-related requirements for the target deployments)
  • Design-related documents (architecture diagrams and class interaction diagrams)

Code Review Steps

The process for an application block code review is shown in Figure 5.1.
Figure 5.1. The code review process for application blocks
As shown in Figure 5.1, application block code review involves the following steps:
  1. Create test plans. Create test plans that list all test cases and execution details from a code review perspective.
  2. Ensure that the implementation is in accordance with the design. The implementation should adhere to the design decided on in the architecture and design phase.
  3. Ensure that naming standards are followed. The naming standards for assemblies, namespaces, classes, methods, and variables should be in accordance with the guidelines specified for the Microsoft® .NET Framework.
  4. Ensure that commenting standards are followed. The comments in the implementation should adhere to the standards for the language used for developing the application block.
  5. Ensure that performance and scalability guidelines are followed. The code should follow the implementation best practices for .NET Framework. This optimizes performance and scalability.
  6. Ensure that guidelines for writing secure code are followed. The code should follow the implementation best practices. This results in hack-resistant code.
  7. Ensure that globalization-related guidelines are followed. The code should follow globalization-related best practices in such a way that the application block can be easily localized for different locales.
  8. Validate exception handling in the code. The goal of exception handling should be to provide useful information to end users and administrators while minimizing unnecessary exceptions. (A hedged sketch of this pattern appears after this list.)
  9. Identify the scenarios for more testing. During the white box testing phase, identify the scenarios that need more testing.
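To make the exception handling guideline in step 8 concrete, the following minimal C# sketch shows one reviewable pattern: catch the low-level exception, add context that is useful to administrators, and rethrow a meaningful exception rather than swallowing it. The FileStorageProvider class, its ReadSection method, and the file-based store are hypothetical illustrations, not part of any shipped block.

    using System;

    // Hypothetical storage provider used only to illustrate the guideline:
    // catch a low-level exception, add context that helps administrators,
    // and rethrow a meaningful exception instead of swallowing it.
    public class FileStorageProvider
    {
        private readonly string path;

        public FileStorageProvider(string path)
        {
            if (path == null)
            {
                throw new ArgumentNullException("path");
            }
            this.path = path;
        }

        public string ReadSection(string sectionName)
        {
            try
            {
                return System.IO.File.ReadAllText(this.path);
            }
            catch (System.IO.IOException ex)
            {
                // Wrap the original exception so the root cause is preserved
                // for administrators, while the message remains meaningful
                // to the caller.
                throw new InvalidOperationException(
                    "Failed to read configuration section '" + sectionName +
                    "' from '" + this.path + "'.", ex);
            }
        }
    }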

Unit Testing a Customized Application Block

Application blocks may need to be customized based on the requirements of the application. After the application block is customized, it must be unit tested to make sure that the customizations do not break the functionality of existing features. Unit testing also ensures that the customizations are in accordance with the requirements of the application that the block is to be integrated with.
For example, in the case of the online bookstore sample application, the CMAB is required to read and write configuration data to and from an Oracle data store. Therefore, the CMAB has to be customized to manage configuration data stored in an Oracle database. The unit testing is based on the process described in Chapter 3, "Testing Process for Application Blocks."
The following are required for planning the unit tests for the customized application block:
  • Requirements and functional specifications for the CMAB and the customizations to be made to the CMAB.
  • Performance targets for the sample application use cases using the CMAB.
  • Deployment architecture for the sample application.
The unit testing process for the CMAB includes the following steps:
  1. Create test plans.
  2. Review the design.
  3. Review the implementation.
  4. Perform black box testing.
  5. Perform white box testing.
  6. Regression test the existing functionality.


Integration Testing an Application Block

Integration testing is the logical extension of unit testing. You may be using the application block as packaged, or you may have customized it before integrating it in your application. In either case, you need to conduct integration testing to ensure that the application block can meet the application-specific requirements. Integration testing ensures the following:
  • The interfaces between the integrated units of the application and the application block are able to interact in accordance with the specifications.
  • The modules of the application with which the application block has been integrated meet all of the functional requirements, performance objectives, globalization objectives, security objectives, and so on.
Integration testing is important because a piece of code that functions correctly when it is tested as a separate unit can demonstrate problems when it is integrated into the actual application. For example, an incompatible data type could be provided by the application to the application block, or data storage could be locked by some part of the application. In the worst scenario, you may discover issues such as the application block being unable to meet the performance objectives of the application, or a loss of data at particular times, such as when the block is passed decimal values with precision of 10 digits after the decimal point.
However, in most situations you will discover issues such as improper invocation of the interfaces exposed by the application block, missing error handling, failure to trap all of the exceptions thrown by the application block, incorrect data in configuration files, or mismatch of data types leading to loss of data. The majority of the errors that are discovered during the integration phase of the application block occur at the interface between the application block and the application.
Note that the integration testing of the application block is part of the overall testing process for the application itself. The application may also be integrating other modules, and the process described here is not a substitute for any other kind of testing that the application needs; it focuses only on testing the integration of the application block with the application.
For the purposes of illustration, the CMAB is assumed to be integrated with the sample application, which requires configuration data to be stored in an Oracle database. Therefore, a custom storage provider has to be added to the CMAB for reading and writing configuration data from and to an Oracle database. The sample application allows users to personalize the Web site and order a book online.

Input for Integration Testing

The following are required for integration testing of an application block:
  • Functional specifications of the online bookstore sample application and application block
  • Requirements for the sample application and application block
  • Performance objectives for the sample application
  • Deployment scenarios for the sample application

Steps for Integration Testing

The integration testing process for an application block is shown in Figure 9.2.
Figure 9.2. The integration testing process for application blocks

Performance Objectives

Performance objectives are captured in the requirements phase and early design phase of the application life cycle. All performance objectives, resource budget data, key usage scenarios, and so on, are captured as a part of the performance modeling process. The performance modeling artifact serves as important input to the performance testing process. In fact, performance testing is a part of the performance modeling process; you may update the model depending on the application life cycle phase in which you are executing the performance tests.
The performance objectives may include some or all of the following:
  • Workload. If the application block is to be integrated with a server-based application, it will be subject to a certain load of concurrent and simultaneous users. The requirements may explicitly specify the number of concurrent users that should be supported by the application block for a particular operation. For example, the requirements for an application block may be 200 concurrent users for one usage scenario and 300 concurrent users for another usage scenario.
  • Response time. If the application block is to be integrated with a server-based application, the response time objective is the time it takes to respond to a request under the peak targeted workload on the server. The response time can be measured in terms of Time to First Byte (TTFB) and Time to Last Byte (TTLB). The response time depends on the load on the server and the network bandwidth over which the client makes a request to the server. The response time is specified for different usage scenarios of the application block. For example, a write feature may have a response time of less than 4 seconds, whereas a read scenario may have a response time of less than 2 seconds for the peak load scenario.
  • Throughput. Throughput is the number of requests that can be served by the application per unit time. A simple application that integrates the application block is supposed to process requests for the targeted workload within the response time goal. This goal can be translated as the number of requests that should be processed per unit time. For an ASP.NET Web application, you can measure this value by monitoring the ASP.NET Applications\Requests/Sec performance counter (a hedged sketch of sampling this counter appears after this list). You can measure the throughput in other units that help you to effectively monitor the performance of the application block; for example, you can measure read operations per second and write operations per second.
  • Resource utilization budget. The resource utilization cost is measured in terms of server resources, such as CPU, memory, disk I/O, and network I/O. The resource utilization budget is the amount of resources consumed by the application block at peak load levels. For example, the processor overhead of the application block should not be more than 10 percent.
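As a rough illustration of monitoring the throughput counter mentioned above, the following C# sketch samples Requests/Sec once per second. The counter category, counter name, and the "__Total__" instance appear on common .NET Framework installations, but they can vary by version, so treat the names as assumptions to verify on the test computer.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class ThroughputMonitor
    {
        static void Main()
        {
            // Counter names vary with the installed .NET Framework version;
            // "__Total__" aggregates all ASP.NET applications on the server.
            using (PerformanceCounter requestsPerSec = new PerformanceCounter(
                "ASP.NET Applications", "Requests/Sec", "__Total__", true))
            {
                // The first sample is always 0; sample over an interval.
                requestsPerSec.NextValue();
                for (int i = 0; i < 10; i++)
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("Requests/sec: {0:F1}", requestsPerSec.NextValue());
                }
            }
        }
    }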

Stress Testing

Stress testing an application block means subjecting it to load beyond the peak operating capacity and at the same time denying resources that are required to process the load. An example of stress testing is hosting the application that uses the application block on a server that already has a processor utilization of more than 75 percent because of existing applications, and subjecting the application block to a concurrent load above the peak operating capacity.
The goal of stress testing is to evaluate how the application block responds under such extreme conditions. Stress testing helps to identify problems that occur only under high load conditions. Stress testing application blocks identifies problems such as memory leaks, resource contentions, and synchronization issues.
Stress testing uses the analysis from load testing. The test scenarios and the maximum operating capacities are obtained from load testing.
The stress testing approach can be broadly classified into two types: sustained testing and maximal testing. The difference is usually the time the stress test is scheduled to run for, because a sustained stress test usually has a longer execution time than a maximal stress test. In fact, stress testing can accomplish its goals by intensity or quantity. A maximal stress test tends to concentrate on intensity; in other words, it sets up much more intense situations than would otherwise be encountered but it attempts to do so in a relatively short period of time. For example, a maximal stress test may have 500 users concurrently initiating a very data-intensive search query. The intensity is much greater than a typical scenario. Conversely, a sustained stress load tends to concentrate on quantity because the goal is to run much more in terms of the number of users or functionality, or both, than would usually be encountered. So, for example, a sustained stress test would be to have 2000 users run an application designed for 1000 users.

Input

The following input is required for stress testing an application block:
  • Performance model (workload characteristics, performance objectives, key usage scenarios, resource budget allocations)
  • Potential problematic scenarios from the performance model and load testing
  • Peak load capacity from load testing

Stress Testing Steps

Stress testing includes the following steps:
  1. Identify key scenarios. Identify test scenarios that are suspected to have potential bottlenecks or performance problems, using the results of the load-testing process.
  2. Identify workload. Identify the workload to be applied to the scenarios identified earlier using the workload characteristics from the performance model, the results of the load testing, and the workload profile used in load testing.
  3. Identify metrics. Identify the metrics to be collected when stress testing the application block. Unlike load testing, where a wide range of metrics is gathered, these metrics are chosen to focus on the specific performance problems suspected from earlier testing.
  4. Create test cases. Create the test cases for the key scenarios identified in Step 1.
  5. Simulate load. Use load-generating tools to simulate the load to stress test the application block as specified in the test case, and use the performance monitoring and measuring tools and the profilers to capture the metrics.
  6. Analyze the results. Analyze the results from the perspective of diagnosing the potential bottlenecks and problems that occur only under continuous extreme load conditions, and report them in a suitable format.
The next sections describe each of these steps.

Step 1: Identify Key Scenarios

Identify scenarios from the test cases used for load testing that may have a performance problem under high load conditions.
To stress test the application block, identify the test scenarios that are critical from the performance perspective. Such scenarios are usually resource-intensive or frequently used. These scenarios may include functionalities such as the following:
  • Synchronizing access to particular code that can lead to resource contention and possible deadlocks
  • Frequent object allocation in various scenarios, such as developing a custom caching solution, and creating unmanaged objects
For example, in the case of the CMAB, the test scenarios that include caching data and writing to a data store such as a file are the potential scenarios that need to be stress tested for memory leaks and synchronization issues, respectively.

Step 2: Identify Workload

Identify the workload for each of the performance-critical scenarios. Choose a workload that stresses the application block sufficiently beyond the peak operating capacity.
You can capture the peak operating capacity for a particular profile from the load testing process and then incrementally increase the load and observe the behavior at various load conditions. For example, in the case of the CMAB, if the peak operating capacity for the write-to-file scenario is 150 concurrent users, you can start the stress testing by incrementing the load in deltas of 50 or 100 users and analyzing the application block's behavior.
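The following C# sketch illustrates one way to apply such an incremental load. It is a minimal sketch, not a replacement for a load-generating tool; the store.xml file name and the write operation are placeholders for the real application block call being stress tested.

    using System;
    using System.IO;
    using System.Threading;

    class StressRamp
    {
        static int failures;

        static void WriteOperation()
        {
            try
            {
                // Concurrent appends to a single file contend for the file
                // lock; that contention is exactly what this test probes.
                File.AppendAllText("store.xml", "<item/>");
            }
            catch (IOException)
            {
                Interlocked.Increment(ref failures);
            }
        }

        static void RunLoad(int users)
        {
            Thread[] threads = new Thread[users];
            for (int i = 0; i < users; i++)
            {
                threads[i] = new Thread(new ThreadStart(WriteOperation));
                threads[i].Start();
            }
            foreach (Thread t in threads)
            {
                t.Join();
            }
        }

        static void Main()
        {
            const int peakCapacity = 150; // from load testing
            const int delta = 50;         // increment per stress iteration

            // Ramp from peak capacity upward in fixed increments.
            for (int users = peakCapacity; users <= peakCapacity + 3 * delta; users += delta)
            {
                failures = 0;
                RunLoad(users);
                Console.WriteLine("{0} concurrent users: {1} failed writes", users, failures);
            }
        }
    }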

Step 3: Identify Metrics

Identify the metrics that help you to analyze the bottlenecks and the metrics related to your performance objectives. When load testing, you may add a wide range of metrics (during the first or subsequent iterations) to detect any possible performance problems, but when stress testing, the monitored metrics are focused on a single problem.
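As an example of focused metric collection, the following C# sketch samples processor utilization, managed heap size, and lock contention once per second. The "TestHarness" instance name is hypothetical, and counter names can vary with the installed .NET Framework version.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class MetricsSampler
    {
        static void Main()
        {
            // "TestHarness" stands in for the process instance under test.
            PerformanceCounter[] counters = new PerformanceCounter[]
            {
                new PerformanceCounter("Processor", "% Processor Time", "_Total", true),
                new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "TestHarness", true),
                new PerformanceCounter(".NET CLR LocksAndThreads", "Contention Rate / sec", "TestHarness", true)
            };

            // Sample each focused counter once per second for a minute.
            for (int sample = 0; sample < 60; sample++)
            {
                Thread.Sleep(1000);
                foreach (PerformanceCounter counter in counters)
                {
                    Console.WriteLine("{0}\\{1} = {2:F1}",
                        counter.CategoryName, counter.CounterName, counter.NextValue());
                }
            }
        }
    }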

Load Testing

Load testing analyzes the behavior of the application block with workload varying from normal to peak load conditions. This allows you to verify that the application block is meeting the desired performance objectives.

Input

The following input is required to load test an application block:
  • Performance model (workload characteristics, performance objectives, and resource budget allocations)
  • Test plans

Load Testing Steps

Load testing involves six steps:
  1. Identify key scenarios. Identify performance-critical scenarios for the application block.
  2. Identify workload. Distribute the total load among the various usage scenarios identified in Step 1.
  3. Identify metrics. Identify the metrics to be collected when executing load tests.
  4. Create test cases. Create the test cases for load testing of the scenarios identified in Step 1.
  5. Simulate load. Use the load-generating tools to simulate the load for each test case, and use the performance monitoring tools (and in some cases, the profilers) to capture the metrics.
  6. Analyze the results. Analyze the data, using the performance objectives as the benchmark. The analysis also identifies potential bottlenecks.
The next sections describe each of these steps.

Step 1: Identify Key Scenarios

Generally, you should start by identifying scenarios that can have a significant performance impact or that have explicit performance goals. In the case of application blocks, you should prepare a prioritized list of usage scenarios, and all of these scenarios should be tested.
In the case of the CMAB, the two major functionalities are reading and writing configuration data. These scenarios can be expanded further based on various configuration options, such as whether caching is enabled or disabled, which data store is used, and which encryption provider is used. Therefore, the load-testing scenarios for the CMAB are combinations of all the configuration options. The following are some of the scenarios for the CMAB:
  • Read a declared configuration section from a file store with caching disabled and data encryption enabled.
  • Write configuration data to a file store with encryption enabled.
  • Read configuration data from a SQL store with caching and data encryption enabled.
  • Write configuration data to a SQL store with data encryption enabled.
  • Initialize the Configuration Manager for the first time while it is performing user operations.
For the CMAB, the probability of performance degradation is highest when data must be written to a file store, because concurrent write operations are not supported on a file and the response time is therefore expected to be greater.

Step 2: Identify Workload

In this step, you identify the workload for each scenario or distribute the total workload among the scenarios. Workload allocation involves specifying the number of concurrent users that are involved in a particular scenario, the rate of requests, and the pattern of requests. You may have a workload defined for each usage scenario in terms of concurrent users (that is, all users firing requests at a given instant without any sleep time between requests). For example, the CMAB has a targeted workload of 200 concurrent users for a read operation on the SQL store with caching disabled and encryption enabled.
In most real-world scenarios, the application block may be performing parallel execution of multiple operations from different scenarios. You may therefore want to analyze how the application block performs with a particular workload profile that is a mix of various scenarios for a given load of simultaneous users (that is, all users have active connections, and all of them may not be firing requests at the same time), with two consecutive requests separated by a specific think time (that is, the time spent by the user between two consecutive requests). Identifying such a workload profile involves the following:
  • Identify the maximum number of simultaneous users accessing each of the usage scenarios in isolation. This number is based on the performance objectives identified during performance modeling. For example, in the case of the CMAB, the expected load of users is 1,200 simultaneous users.
  • Identify the expected mix of usage scenarios. In most real-world server–based applications, a mix of application block usage scenarios might be accessed by various users. You should identify each mix by naming each one as a unique profile. Identify the number of simultaneous users for each scenario and the pattern in which the users have to be distributed across test scenarios. Distribute the workload based on the requirements that the application blocks have been designed for. For example, the CMAB is optimized for situations where read operations outnumber write operations; it is not meant for online transaction processing (OLTP)–style applications. Therefore, the workload has to be distributed so only a small part of the workload is allocated for write operations. Group users together into user profiles based on the key scenarios they participate in.
    For example, in the case of the CMAB, for any given database store, there will be a read profile and a write profile. Each profile has its respective use case as the dominant one. A sample composition of a read profile for a SQL store is shown in Table 8.1. The table assumes that out of a total workload of 1,000 simultaneous users, 600 users are using the SQL store.

Testing Process for Globalization

The following process formalizes the set of activities that are required to ensure a world-ready application block. The process can be easily customized to suit specific needs for an application block by creating test plans that are specific to particular scenarios.

Input

The following input is required for the globalization testing process:
  • Functional specifications of the application block
  • Requirements for the application block
  • Deployment scenarios for the application

Steps

The process for globalization testing is shown in Figure 7.1.
Figure 7.1. The globalization testing process for application blocks
The globalization testing process consists of the following steps:
  1. Create test plans. Create test plans based on the priority of each scenario and the test platforms it needs to be tested on.
  2. Create the test environment. Set up the test environment for multiple locales that are targeted for the application block.
  3. Execute the test cases. Execute the test cases in the test environment.
  4. Analyze the results. Analyze the results to ensure that there is no data loss or inconsistency in the output.
The next sections describe each of these steps.

Step 1: Create Test Plans

You must create test plans for the application block scenarios that you must test for globalization. In general, you can develop the test cases (detailed test plan document) and the execution details for each test case (detailed test case document) based on the functional specifications and the requirements document. The functional specification document describes the various interfaces that the application block will expose. The requirements document specifies whether the application block is targeted toward a specific set of locales or whether it is just required to be globalization-compliant.
When you create the test plans, you should:
  • Decide the priority of each scenario.
  • Select the test platform.
The next sections describe each of these considerations.

Decide the Priority of Each Scenario

To make globalization testing more effective, it is a good idea to assign a priority to each scenario that must be tested. You should check for the following to identify the high priority scenarios:
  • Check if the application block needs to support text data in ANSI format.
  • Check if the application block extensively processes strings by performing tasks such as sorting, comparison, concatenation, and transformation (conversion to lowercase or uppercase).
  • Check if there are certain APIs that can accept locale-specific information such as address, currency, dates, and numerals.
  • Check if the application block uses files for data storage or data exchange (for example, Microsoft Windows metafiles, security configuration tools, and Web-based tools).
In the case of the CMAB, the primary usage functionality is to store and retrieve configuration information from various data stores. Some of the globalization-related testing should test the following high priority scenarios in the CMAB to make sure that the data is stored and retrieved without any loss of integrity and consistency:
  • The CMAB is able to store locale-specific information such as currency, dates, and strings in the supported data stores without any loss of data integrity.
  • The CMAB is able to return the locale-specific information in its original form if the locale of the user requesting the information is the same as the locale of the stored data. There may be a requirement that if users from diverse locales are accessing the information, the information should be converted to the user's locale before it is returned. This requires the development of custom handlers that can deserialize the data to meet such a requirement for a particular scenario. However, globalization testing needs to ensure that such customization is possible with the application block.
    Consider the scenario where the CMAB is integrated with an online chat application that can be accessed from different time zones. The Web server and the application clients fall under different time zones in such a way that the local time at the server and the local time at the client are different. During testing, you should ensure that the custom handlers convert the local time (local data) from the user into universal time (universal data) before storing it as configuration values. In this way, each user sees what time other users logged in according to his or her local time zone.

    The stored configuration values can be accessed using the CMAB and displayed in the local time, as and when required. If a user from a different time zone requests the same configuration information, the value can be converted to his or her time zone.
  • Assuming that the CMAB provides out-of-the-box support for a fixed number of locales by providing satellite assemblies for resources, you should test all scenarios that result in an exception. Testing these scenarios ensures that the exception messages from the CMAB are on expected lines without any truncation or deformation (loss of characters or the introduction of junk characters) of the actual string.

Select a Test Platform

Identify the operating system that testing is to be performed on. If the requirements explicitly state that the application block needs to support a specific set of cultures, you have the following options for choosing a test platform:
  • Use the local build of the operating system. You can use the local build of the operating system, such as the U.S. build, and install different language groups. The application that is used to test the application block can then change the current UI culture and test that the exception messages and other data returned by the application block are in accordance with the current UI culture.
  • Use the Multilanguage User Interface (MUI) operating system. The user can change the language that the UI of the operating system will be displayed in and test the application integrating the application block. The application block should also be able to return the error messages and other data based on the current culture settings. This approach is easier than installing multiple localized versions of the operating system.
  • Use the localized build of the target operating system. This approach does not have a significant advantage over the preceding options.
If you do not have an explicit requirement for the locales that must be supported by the application block, you can test by installing a minimum of two language groups from diverse regions, such as Japanese and German. This ensures that the application block is able to support locales from diverse cultures.
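The following minimal C# sketch illustrates the first option: switching the current UI culture before exercising a failing call and inspecting the exception message. FailingBlockCall is a hypothetical stand-in for the application block API under test; the framework's own exception message is localized only if the corresponding language pack is installed, while a block's messages come from its satellite assemblies.

    using System;
    using System.Globalization;
    using System.Threading;

    class UICultureTest
    {
        // Stand-in for an application block call that throws a localized
        // exception; a real test would call the block's public API.
        static void FailingBlockCall()
        {
            throw new ArgumentNullException("sectionName");
        }

        static void Main()
        {
            // German and Japanese represent diverse language groups.
            string[] cultures = { "de-DE", "ja-JP" };
            foreach (string name in cultures)
            {
                // Switch the UI culture so resource lookups (including a
                // block's satellite assemblies) resolve for that language.
                Thread.CurrentThread.CurrentUICulture = new CultureInfo(name);
                try
                {
                    FailingBlockCall();
                }
                catch (ArgumentNullException ex)
                {
                    Console.WriteLine("{0}: {1}", name, ex.Message);
                }
            }
        }
    }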

Step 2: Create the Test Environment

As mentioned earlier, to perform globalization testing, you must install multiple language groups on the test computers. After you install the language groups, make sure that the culture or locale is not your local culture or locale.
You should make sure that the locale of the server that the test harness for the application block is hosted on is not the same locale as that of the test computers.
For the culture or locale example (the second test scenario in Step 1), configure the following:
  • Server. Install U.S.-English language and time zone support.
  • UserA's system. Install German language and time zone support.
  • UserB's system. Install U.K. English and time zone support.

Step 3: Execute Tests

After the environment is set for globalization testing, you should focus on potential globalization problems when you run your functional and integration test cases. Consider the following guidelines for the test cases to be executed:
  • Put greater emphasis on test cases that deal with passing parameters to the application block. For the sample scenario considered earlier, you can consider test cases that determine the correct selection of the culture or locale in accordance with the user's location and choice of environment. A hedged sketch of the kind of culture- and time zone-handling code involved in the preceding example appears after this list.
  • Focus on test cases that deal with the input and output of strings, directly or indirectly.
  • During testing, you should use test data that contains mixed characters from various languages and different time zones.
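The original code listing for the time zone example is not reproduced here; the following C# sketch is a hedged reconstruction of the kind of logic involved, assuming login times are converted to universal time before storage and formatted per user culture for display. The class and method names are illustrative only.

    using System;
    using System.Globalization;

    class LoginTimeExample
    {
        // Convert the user's local login time to universal time before it
        // is written to the configuration store, so the stored value is
        // time zone neutral.
        static DateTime ToStorableValue(DateTime localLoginTime)
        {
            return localLoginTime.ToUniversalTime();
        }

        // Format the stored universal value for display in a given culture.
        // Note: ToLocalTime() uses the current machine's time zone; a real
        // multi-user server would apply each user's own offset instead.
        static string ToDisplayValue(DateTime storedUniversalTime, CultureInfo userCulture)
        {
            return storedUniversalTime.ToLocalTime().ToString("F", userCulture);
        }

        static void Main()
        {
            DateTime stored = ToStorableValue(DateTime.Now);
            Console.WriteLine(ToDisplayValue(stored, new CultureInfo("de-DE")));
            Console.WriteLine(ToDisplayValue(stored, new CultureInfo("en-GB")));
        }
    }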
You can create automated test cases by using a test framework such as NUnit. The test stubs can focus on passing various types of input to the application block API. In this way, you automate the execution of test cases and ensure that each new build of the application block during the development cycle is world-ready.
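A minimal NUnit sketch of such an automated test follows. The ConfigStore class is an in-memory stand-in for the application block's read/write API so that the example is self-contained; a real test would call the block itself.

    using System.Collections;
    using NUnit.Framework;

    // In-memory stand-in for the application block's read/write API.
    static class ConfigStore
    {
        static readonly Hashtable sections = new Hashtable();
        public static void Write(string section, object data) { sections[section] = data; }
        public static object Read(string section) { return sections[section]; }
    }

    [TestFixture]
    public class GlobalizationRoundTripTests
    {
        [Test]
        public void MixedScriptStringRoundTripsWithoutLoss()
        {
            // Mixed Latin, Japanese, Chinese, and Greek characters exercise
            // storage and retrieval without loss of data integrity.
            string original = "Grüße こんにちは 你好 Ωμέγα";
            ConfigStore.Write("GreetingSection", original);
            Assert.AreEqual(original, (string)ConfigStore.Read("GreetingSection"));
        }
    }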

Step 4: Analyze the Results

The tests may reveal that the functionality of the application block is not working as intended for different locales. In the worst-case scenario, the functionality may fail completely, but in most of the scenarios, you may have issues similar to the following:
  • Random appearance of special characters, such as question marks, ANSI characters, vertical bars, boxes, and tildes
  • Incorrect formatting of data, such as date and currency, in the return values from the application block
  • Error message text that does not appear in accordance with the current locale setting
Each of these issues has a different root cause. For example, the appearance of boxes or vertical bars indicates that the selected font cannot display some of the characters; the appearance of question marks indicates problems with Unicode-to-ANSI conversion.

Usually, a simple code review of the module reveals mistakes such as hard-coded strings, misuse of an overloaded API that takes culture- or locale-related inputs, or an incorrectly set culture-related property for the thread that the call is executed on. There may be other scenarios where the code converts strings from lowercase to uppercase before performing a case-insensitive comparison. This can produce unexpected results for certain languages, such as Chinese and Japanese, that do not have the concept of uppercase and lowercase characters.
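The uppercase-comparison pitfall can be demonstrated with the commonly cited Turkish "I" mapping, a related case from a different language family. The following C# sketch contrasts a culture-sensitive ToUpper comparison with an ordinal comparison.

    using System;
    using System.Globalization;
    using System.Threading;

    class CaseComparisonExample
    {
        static void Main()
        {
            // Under the Turkish culture, ToUpper() maps 'i' to 'İ' (dotted
            // capital I), so an uppercase-then-compare check fails even for
            // ASCII identifiers.
            Thread.CurrentThread.CurrentCulture = new CultureInfo("tr-TR");
            string expected = "FILE";
            string input = "file";

            bool brokenCheck = input.ToUpper() == expected;     // false in tr-TR
            bool correctCheck = string.Compare(input, expected,
                StringComparison.OrdinalIgnoreCase) == 0;       // true everywhere

            Console.WriteLine("ToUpper comparison: {0}", brokenCheck);
            Console.WriteLine("Ordinal comparison: {0}", correctCheck);
        }
    }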

White Box Testing

White box testing assumes that the tester can take a look at the code for the application block and create test cases that look for any potential failure scenarios. During white box testing, you analyze the code of the application block and prepare test cases for testing the functionality to ensure that the class is behaving in accordance with the specifications and testing for robustness.

Input

The following input is required for white box testing:
  • Requirements
  • Functional specifications
  • High-level design documents
  • Detailed design documents
  • Application block source code

White Box Testing Steps

The white box testing process for an application block is shown in Figure 6.2.
Figure 6.2. White box testing process
White box testing involves the following steps:
  1. Create test plans. Identify all white box test scenarios and prioritize them.
  2. Profile the application block. This step involves studying the code at run time to understand the resource utilization, time spent by various methods and operations, areas in code that are not accessed, and so on.
  3. Test the internal subroutines. This step ensures that the subroutines or the nonpublic interfaces can handle all types of data appropriately.
  4. Test loops and conditional statements. This step focuses on testing the loops and conditional statements for accuracy and efficiency for different data inputs.
  5. Perform security testing. White box security testing helps you understand possible security loopholes by looking at the way the code handles security.
The next sections describe each of these steps.

Step 1: Create Test Plans

The test plans for white box testing can be created only after a reasonably stable build of the application block is available. The creation of test plans involves extensive code review and input from design review and black box testing. The test plans for white box testing include the following:
  • Profiling, including code coverage, resource utilization, and resource leaks
  • Testing internal subroutines for integrity and consistency in data processing
  • Loop testing, covering simple, concatenated, nested, and unstructured loops
  • Conditional statements, such as simple expressions, compound expressions, and expressions that evaluate to Boolean
For more information about creating test cases, see Chapter 3, "Testing Process for Application Blocks."

Step 2: Profile the Application Block

Profiling allows you to monitor the behavior of a particular code path at run time when the code is being executed. Profiling includes the following tests:
  • Code coverage. Code coverage testing ensures that every line of code is executed at least once during testing. You must develop test cases in a way that ensures the entire execution tree is tested at least once. To ensure that each statement is executed once, test cases should be based on the control structure in the code and the sequence diagrams from the design documents. The control structures in the code consist of various conditions as follows:
    • Various conditional statements that branch into different code paths. For example, a Boolean variable that evaluates to "false" or "true" can execute different code paths. There can be other compound conditions with multiple conditions, Boolean operators, and bit-wise comparisons.
    • Various types of loops, such as simple loops, concatenated loops, and nested loops.
    There are various tools available for code coverage testing, but you still need to execute the test cases. The tools identify the code that has been executed during the testing. In this way, you can identify the redundant code that never gets executed. This code may be left over from a previous version of the functionality or may signify a partially implemented functionality or dead code that never gets called.
    Tables 6.3 and 6.4 list sample test cases for testing the code coverage of ConfigurationManager class of the CMAB.
Table 6.3: The CMAB Test Case Document for Testing the Code Coverage for the InitAllProviders Method and All Invoked Methods
Scenario 1.3: Test the code coverage for the method InitAllProviders() in the ConfigurationManager class.
Priority: High
Execution details: Create a sample application for reading configuration data from a data store through the CMAB. Run the application under the following conditions:
  • With a default section present
  • Without a default section
Trace the code coverage using an automated tool, and report any code not being called in InitAllProviders().
Tools required: Custom test harness integrating the application block for reading configuration data.
Expected results: The entire code for the InitAllProviders() method and all the invoked methods should be covered under the preceding conditions.
Table 6.4: The CMAB Test Case Document for Testing the Code Coverage for Read Method and All Invoked Methods
Scenario 1.4: Test the code coverage for the method Read(sectionName) in the ConfigurationManager class.
Priority: High
Execution details: Create a sample application for reading configuration data from a SQL database through the CMAB. Run the application under the following conditions:
  • Give a null section name or a section name of zero length to the Read method.
  • Read a section whose name is not mentioned in the App.config or Web.config files.
  • Read a configuration section that has cache enabled.
  • Read a configuration section that has cache disabled.
  • Read a configuration section successfully with the cache disabled, and then disconnect the database and read the section again.
  • Read a configuration section that has no configuration data in the database.
  • Read a configuration section that does not have provider information mentioned in the App.config or Web.config files.
Trace the code coverage, and report any code not being covered in the Read(sectionName) method.
Tools required: Custom test harness integrating the application block for reading configuration data.
Expected results: The entire code for the Read(sectionName) method and the invoked methods should be covered under the preceding conditions.
  • Memory allocation pattern. You can profile the memory allocation pattern of the application block by using code profiling tools. You need to check for the following in the allocation pattern:
    • The percentage of allocations in Gen 0, Gen 1, and Gen 2. If the percentage of objects in Gen 2 is high, the resource cleanup in the application block is not efficient and there are memory leaks. This probably means the objects are held up longer than required (this may be expected in some scenarios). Profiling the application blocks gives you an idea of the type of objects that are being promoted to Gen 2 of the heap. You can then focus on analyzing the culprit code snippet and rectify the problem.
      An efficient allocation pattern should have most of the allocations in Gen 0 and Gen 1 over a period of time.
      There might be certain objects, such as a pinned pool of reusable buffers used for I/O work, that are promoted to Gen 2 when the application starts. The faster this pool of buffers gets promoted to Gen 2, the better.
    • The fragmentation of the heap. Heap fragmentation happens most often in scenarios where objects are pinned and cannot be moved, because memory cannot be efficiently compacted around them. The longer these objects are pinned, the greater the chance of heap fragmentation. As mentioned earlier, there might be a pool of buffers that needs to be used for I/O calls. If these objects are initialized when the application starts, they quickly move to Gen 2, where the overhead of heap allocation is largely removed.
    • "Side effect" allocations. Large number of side effect allocations take place because of some calls in a loop or recursive functions, such as the calls to string-related functions String.ToLower()or concatenation using the + operator happening in a loop. This causes the original string to be discarded and a new string to be allocated for each such operation. These operations in a loop may cause significant increase in memory consumption.
    You can also analyze memory leaks by using debugging tools, such as WinDbg from the Windows Resource Kit. Using these tools, you can analyze the heap allocations for the process.

  • Cost of serialization. There may be certain scenarios when the application block needs to serialize and transmit data across processes or computers. Serializing data involves memory overhead that can be quite significant, depending on the amount of data and the type of serializer or formatter used for serialization. You need to instrument your code to take the snapshots of memory utilized by the garbage collector before and after serialization.

  • Contention and deadlock issues. Contention and deadlock issues mostly surface under high load conditions. The input from load testing (during black box testing) gives you information about the potential execution paths where contention and deadlock issues are suspected. For example, in the case of the CMAB, you may suspect a deadlock if you see requests timing out when trying to update a particular piece of information in the persistent medium.
    You need to analyze these issues with invasive profiling techniques, such as using the WinDbg tool, in the production environment on a live process or by analyzing stack dumps of the process.

  • Time taken for executing a code path. For scenarios where performance is critical, you can profile the time they take. Timing a code path may require custom instrumentation of the appropriate code. There are also various tools available that help you measure the time it takes for a particular scenario to execute by automatically creating instrumented assemblies of the application block. The profiling for time taken may be for complete execution of a usage scenario, an internal function, or even a particular loop within a function.

  • Profiling for excessive resource utilization. The input from a performance test may show excessive resource utilization, such as CPU, memory, disk I/O, or network I/O, for a particular usage scenario. But you may need to profile the code to track the piece of code that is blocking resources disproportionately. This might be an expected behavior for a particular scenario in some circumstances. For example, an empty while loop may pump up the processor utilization significantly and is something you should track and rectify; whereas, a computational logic that involves complex calculations may genuinely warrant high processor utilization.
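The following minimal C# sketch contrasts the "side effect" allocation pattern described earlier with a StringBuilder-based version that grows a single buffer; the loop counts are arbitrary.

    using System;
    using System.Text;

    class SideEffectAllocations
    {
        static void Main()
        {
            // Each '+' on a string discards the old instance and allocates
            // a new one, so this loop creates thousands of short-lived
            // objects for the garbage collector to handle.
            string concatenated = string.Empty;
            for (int i = 0; i < 10000; i++)
            {
                concatenated += "x";
            }

            // StringBuilder grows a single buffer instead, avoiding the
            // per-iteration "side effect" allocations described above.
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10000; i++)
            {
                builder.Append("x");
            }
            string built = builder.ToString();

            Console.WriteLine("{0} {1}", concatenated.Length, built.Length);
        }
    }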

Step 3: Test the Internal Subroutines

Thoroughly test all internal subroutines for every type of input. The subroutines that are internally called by the public API to process the input may be working as expected for the expected input types. However, after a thorough code review, you may notice that there are some expressions that may fail for certain types of input. This warrants the testing of internal methods and subroutines by developing NUnit tests for internal functions after a thorough code review. Following are some examples of potential pitfalls:
  • The code analysis reveals that the function may fail for a certain input value. For example, a function expecting numeric input may fail for an input value of 0.
  • In the case of the CMAB, the function reads information from the cache. The function returns the information appropriately if the cache is not empty. However, if during the process of reading, the cache is flushed or refreshed, the function may fail.
  • The function may be reading values in a buffer before returning them to the client. Certain input values might result in a buffer overflow and loss of data.
  • The subroutine does not handle an exception where a remote call to a database is not successful. For example, in the CMAB, if the function is trying to update the SQL Server information but the SQL Server database is not available, it does not log the failure in the appropriate event sink.

Step 4: Test Loops and Conditional Statements

The application block may contain various types of loops, such as simple, nested, concatenated, and unstructured loops. Although unstructured loops require redesigning, the other types of loops require extensive testing for various inputs. Loops are critical to the application block performance because they magnify seemingly trivial problems by iterating through the loop multiple times.
Some common errors can cause a loop to execute an infinite number of times. This can result in excessive CPU or memory utilization, causing the application to fail. Therefore, all loops in the application block should be tested for the following conditions:
  • Provide input that results in executing the loop zero times. This can be achieved where the lower bound value of the loop is greater than the upper bound value.
  • Provide input that results in executing the loop one time. This can be achieved where the lower bound value and upper bound value are the same.
  • Provide input that results in executing the loop a specified number of times within a specific range.
  • Provide input that causes the loop to iterate n, n-1, and n+1 times. The out-of-bound iterations (n-1 and n+1) are very difficult to detect with a simple code review; therefore, there is a need to execute special test cases that can simulate such cases.
When testing nested loops, you can start by testing the innermost loop, with all other loops set to iterate a minimum number of times. After the innermost loop is tested, you can set it to iterate a minimum number of times, and then test the outermost loop as if it was a simple loop.
Also, all of the conditional statements should be completely tested. The process of conditional testing ensures that the controlling expressions have been exercised during testing by presenting the evaluating expression with a set of input values. The input values ensure that all possible outcomes of the expressions are tested for expected output. The conditional statements can be a relational expression, a simple condition, a compound condition, or a Boolean expression.
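A minimal NUnit sketch of such loop boundary testing follows. SumRange is a hypothetical method under test, chosen only to make the zero-, one-, and n-1/n/n+1-iteration cases concrete.

    using NUnit.Framework;

    [TestFixture]
    public class LoopBoundaryTests
    {
        // Hypothetical method under test: sums the integers from lower to
        // upper inclusive with a simple loop.
        static int SumRange(int lower, int upper)
        {
            int total = 0;
            for (int i = lower; i <= upper; i++)
            {
                total += i;
            }
            return total;
        }

        [Test] public void ZeroIterations_LowerAboveUpper() { Assert.AreEqual(0, SumRange(5, 4)); }
        [Test] public void OneIteration_BoundsEqual()       { Assert.AreEqual(5, SumRange(5, 5)); }
        [Test] public void TypicalRange()                   { Assert.AreEqual(15, SumRange(1, 5)); }

        [Test]
        public void AroundUpperBound_NMinus1_N_NPlus1()
        {
            Assert.AreEqual(10, SumRange(1, 4));   // n-1 iterations
            Assert.AreEqual(15, SumRange(1, 5));   // n iterations
            Assert.AreEqual(21, SumRange(1, 6));   // n+1 iterations
        }
    }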

Step 5: Perform Security Testing

White box security testing focuses on identifying test scenarios and testing based on knowledge of implementation details. During code reviews, you can identify areas in code that validate data, handle data, access resources, or perform privileged operations. Test cases can be developed to test all such areas. Following are some examples:
  • Validation techniques can be tested by passing negative values, null values, and so on, to make sure the proper error messages are displayed. (A minimal test sketch appears after this list.)
  • If the application block handles sensitive data and uses cryptography, then based on knowledge from code reviews, test cases can be developed to validate the encryption technique or cryptography methods.
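A minimal NUnit sketch of the first example follows. SetCacheRefreshInterval is a hypothetical validator standing in for the kind of parameter checking a code review would identify as a test target.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class ValidationTests
    {
        // Hypothetical validator: rejects non-positive refresh intervals.
        static void SetCacheRefreshInterval(int seconds)
        {
            if (seconds <= 0)
            {
                throw new ArgumentOutOfRangeException("seconds",
                    "Refresh interval must be a positive number of seconds.");
            }
        }

        [Test]
        public void NegativeValueIsRejectedWithMeaningfulError()
        {
            try
            {
                SetCacheRefreshInterval(-1);
                Assert.Fail("Negative input should have been rejected.");
            }
            catch (ArgumentOutOfRangeException ex)
            {
                // The error message should help diagnose the actual problem.
                Assert.IsTrue(ex.Message.IndexOf("positive") >= 0);
            }
        }
    }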

Overview

After you complete the design and code review of the application block, you need to test the application block to make sure it meets the functional requirements and successfully implements the functionality for the usage scenarios it was designed and implemented for.
The testing effort can be divided into two categories that complement each other:
  • Black box testing. This approach tests all possible combinations of end-user actions. Black box testing assumes no knowledge of code and is intended to simulate the end-user experience. You can use sample applications to integrate and test the application block for black box testing. You can begin planning for black box testing immediately after the requirements and the functional specifications are available.
  • White box testing. (This is also known as glass box, clear box, and open box testing.) In white box testing, you create test cases by looking at the code to detect any potential failure scenarios. You determine the suitable input data for testing various APIs and the special code paths that need to be tested by analyzing the source code for the application block. Therefore, the test plans need to be updated before starting white box testing and only after a stable build of the code is available.
    A failure of a white box test may result in a change that requires all black box testing to be repeated and white box testing paths to be reviewed and possibly changed.
The goals of testing can be summarized as follows:
  • Verify that the application block is able to meet all requirements in accordance with the functional specifications document.
  • Make sure that the application block has consistent and expected output for all usage scenarios for both valid and invalid inputs. For example, make sure the error messages are meaningful and help the user in diagnosing the actual problem.
You may need to develop one or more of the following to test the functionality of the application blocks:
  • Test harnesses, such as NUnit test cases, to test the API of the application block for various inputs
  • Prototype Windows Forms and Web Forms applications that integrate the application blocks and are deployed in simulated target deployments
  • Automated scripts that test the API of the application blocks for various inputs
This chapter examines the process of black box testing and white box testing. It includes code examples and sample test cases to demonstrate the approach for black box testing and white box testing application blocks. For the purpose of the examples illustrated in this chapter, it is assumed that functionality testing is being done for the Configuration Management Application Block (CMAB). The CMAB has already been through design and code review. The requirements for the CMAB are the following:
  • It provides the functionality to read and store configuration information transparently in a persistent storage medium. The storage mediums are SQL Server, the registry, and an XML file.
  • It provides a configurable option to store the information in encrypted form and plain text using XML notation.
  • It can be used with desktop applications and Web applications that are deployed in a Web farm.
  • It caches configuration information in memory to reduce cross-process communication, such as reading from any persistent medium. This reduces the response time of the request for any configuration information. The expiration and scavenging mechanism for the data that is cached in memory is similar to the cron algorithm in UNIX.
  • It can store and return data from various locales and cultures without any loss of data integrity.

Black Box Testing

Black box testing assumes the code to be a black box that responds to input stimuli. The testing focuses on the output to various types of stimuli in the targeted deployment environments. It focuses on validation tests, boundary conditions, destructive testing, reproducibility tests, performance tests, globalization, and security-related testing.
Risk analysis should be done to estimate the amount and level of testing required. Risk analysis provides the criteria for deciding when to stop the testing process, and it prioritizes the test cases by taking into account both the impact of errors and the probability of their occurrence. By concentrating on test cases that can lead to high-impact, high-probability errors, you can reduce the testing effort while still ensuring that the application block is good enough to be used by various applications.
Preferably, black box testing should be conducted in a test environment close to the target environment. There can be one or more deployment scenarios for the application block that is being tested. The requirements and the behavior of the application block can vary with the deployment scenario; therefore, testing the application block in a simulated environment that closely resembles the deployment environment ensures that it is tested to satisfy all requirements of the targeted real-life conditions. There will be no surprises in the production environment. The test cases being executed ensure robustness of the application block for the targeted deployment scenarios.
For example, the CMAB can be deployed on the desktop with Windows Forms applications or in a Web farm when integrated with Web applications. The CMAB requirements, such as performance objectives, vary from the desktop environment to the Web environment. The test cases and the test environment have to vary according to the target environments. Other application blocks may have more restricted and specialized target environments. An example of an application block that requires a specialized test environment is an application block that is deployed on mobile devices and is used for synchronizing data with a central server.
As mentioned earlier, you will need to develop custom test harnesses for functionality testing purposes.

Input

The following input is required for black box testing:
  • Requirements
  • Functional specifications
  • High-level design documents
  • Application block source code
The black box testing process for an application block is shown in Figure 6.1.
Figure 6.1. Black box testing process

Black Box Testing Steps

Black box testing involves testing external interfaces to ensure that the code meets functional and nonfunctional requirements. The various steps involved in black box testing are the following:
  1. Create test plans. Create prioritized test plans for black box testing.
  2. Test the external interfaces. Test the external interfaces for various types of inputs using automated test suites, such as NUnit suites and custom prototype applications.
  3. Perform load testing. Load test the application block to analyze the behavior at various load levels. This ensures that it meets all performance objectives that are stated as requirements.
  4. Perform stress testing. Stress test the application block to analyze various bottlenecks and to identify any issues visible only under extreme load conditions, such as race conditions and contentions.
  5. Perform security testing. Test for possible threats in deployment scenarios. Deploy the application block in a simulated target environment and try to hack the application by exploiting any possible weakness of the application block.
  6. Perform globalization testing. Execute test cases to ensure that the application block can be integrated with applications targeted toward locales other than the default locale used for development.
The next sections describe each of these steps.

Step 1: Create Test Plans

The first step in the process of black box testing is to create prioritized test plans. You can prepare the test cases for black box testing even before you implement the application block. The test cases are based on the requirements and the functional specification documents.
The requirements and functional specification documents help you extract various usage scenarios and the expected output in each scenario.
The detailed test plan document includes test cases for the following:
  • Testing the external interfaces with various types of input
  • Load testing and stress testing
  • Security testing
  • Globalization testing
For more information about creating test cases, see Chapter 3, "Testing Process for Application Blocks."

Step 2: Test the External Interfaces

You need to test the external interfaces of the application block using the following strategies:
  • Ensure that the application block exposes interfaces that address all functional specifications and requirements. To perform this validation testing, do the following:
    1. Prepare a checklist of all requirements and features that are expected from the application block.
    2. Create test harnesses, such as NUnit tests, and small "hello world" applications that use all exposed APIs of the application block under test.
    3. Run the test harnesses.
    Using NUnit, you can validate that each intended feature works when it is given input on the expected lines.
    The sample applications can indicate whether the application block can be integrated and deployed in the target environment. The sample applications are used to test the possible user actions for the usage scenarios; these include both the expected process flows and random inputs. For example, a Web application deployed in a Web farm that integrates the CMAB can be used to test reading and writing information from a persistent medium, such as the registry, a SQL Server database, or an XML file. You need to test the functionality by using various configuration options in the configuration file.
  • Test for various types of inputs. After ensuring that the application block exposes the interfaces that address all of the functional specifications, you need to test the robustness of these interfaces. You need to test for the following input types:
    • Randomly generated input within a specified range
    • Boundary cases for the specified range of input
    • Zero, if the input is numeric
    • The null input
    • Invalid input or input that is out of the expected range
This testing ensures that the application block provides expected output for data within the specified range and gracefully handles all invalid data. Meaningful error messages should be displayed for invalid input. Boundary testing ensures that the highest and lowest permitted inputs produce expected output.
You can use NUnit for this type of input testing. Separate sets of NUnit tests can be generated for each range of input types. Executing these NUnit tests on each new build of the application block ensures that the API is able to successfully process the given input.
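A minimal NUnit sketch of such an input-type test set follows. SetCacheSize is a hypothetical API, chosen only to make the boundary, zero, null, and out-of-range cases concrete; a real suite would target the application block's exposed interfaces.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class InputRangeTests
    {
        // Hypothetical API under test: accepts a cache size between 1 and
        // 1000 entries and rejects everything else.
        static void SetCacheSize(object value)
        {
            if (value == null) throw new ArgumentNullException("value");
            int size = (int)value;
            if (size < 1 || size > 1000)
                throw new ArgumentOutOfRangeException("value");
        }

        [Test] public void AcceptsLowerBoundary() { SetCacheSize(1); }
        [Test] public void AcceptsUpperBoundary() { SetCacheSize(1000); }

        [Test]
        public void RejectsZero()
        {
            try { SetCacheSize(0); Assert.Fail("Zero should be rejected."); }
            catch (ArgumentOutOfRangeException) { }
        }

        [Test]
        public void RejectsNull()
        {
            try { SetCacheSize(null); Assert.Fail("Null should be rejected."); }
            catch (ArgumentNullException) { }
        }

        [Test]
        public void RejectsOutOfRange()
        {
            try { SetCacheSize(1001); Assert.Fail("Out-of-range input should be rejected."); }
            catch (ArgumentOutOfRangeException) { }
        }
    }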