Workshop on Software Testing Automation and Practices (STAP 2011)

Software test automation is an important topic in software testing research, covering test models, test generation algorithms, automated test execution, and test result analysis. To promote exchange and collaboration among testing researchers in China, and with the support of the Technical Committee on Software Engineering (Software Testing Group) of the China Computer Federation, a one-and-a-half-day workshop (all day April 14 and the morning of April 15) — STAP 2011, Software Testing Automation and Practices 2011 — will be held at Beihang University. The workshop features 13 invited talks by speakers from home and abroad, presenting test automation methods and techniques from different perspectives, as well as their application in large-scale testing projects.

主题报告

Test Case Selection Strategies for Model-Based Testing: Search-based Approaches and Industrial Case Studies

Lionel C. Briand, Simula, Norway

Abstract: Systems in all industry sectors increasingly rely on software for critical and complex functions. Software dependability must be ensured through verification, and one of the most widespread and practical verification techniques is testing, that is, the systematic and controlled execution of the system being verified. In recent years, Model-Based Testing (MBT) has attracted increasingly wide interest from industry and academia. MBT allows automatic generation of a large and comprehensive set of test cases from system models (e.g., state machines), which leads to systematic system testing. However, even when using simple test strategies, applying MBT in large industrial systems often leads to generating large sets of test cases that cannot possibly be executed within time and cost constraints. In this situation, test case selection techniques must be employed to select a subset from the entire test suite such that the selected subset conforms to available resources while maximizing fault detection. In this talk, I will present the results of a comprehensive investigation of alternative selection strategies, based on various heuristics and algorithms, that attempt to maximize diversity or coverage in test suites. Based on an industrial case study, we will also estimate the potential benefits that can result from such test case selection strategies.
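The diversity-maximizing selection the abstract alludes to can be sketched with a simple greedy heuristic. The code below is an illustrative reading only, not the strategies evaluated in the talk: test cases are modeled as sets of covered model transitions, and cases are picked one at a time so as to maximize the minimum Jaccard distance to those already selected.

```python
# Illustrative sketch (not the talk's actual algorithms): greedy test case
# selection under a budget, maximizing diversity measured as Jaccard
# distance between test cases represented as sets of covered transitions.

def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B|; 0 for identical sets, 1 for disjoint ones."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def select_diverse(test_cases, budget):
    """Greedily pick up to `budget` test cases, each time adding the
    candidate farthest (by minimum distance) from those already selected."""
    remaining = dict(test_cases)  # name -> set of covered transitions
    # Seed with the test case covering the most transitions.
    first = max(remaining, key=lambda n: len(remaining[n]))
    selected = [first]
    del remaining[first]
    while remaining and len(selected) < budget:
        best = max(
            remaining,
            key=lambda n: min(jaccard_distance(remaining[n], test_cases[s])
                              for s in selected),
        )
        selected.append(best)
        del remaining[best]
    return selected

suite = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},   # near-duplicate of t1
    "t3": {"d", "e"},   # covers entirely different transitions
    "t4": {"a", "c"},
}
print(select_diverse(suite, 2))  # ['t1', 't3']
```

With a budget of two, the near-duplicates t2 and t4 are skipped in favor of t3, which exercises transitions the seed case does not.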


Automated Transition from Use Cases to UML State Machines to Support State-based Testing

Tao Yue ([email protected]), Simula, Norway

Abstract: Use cases are commonly used to structure and document requirements, while UML state machine diagrams often describe the behavior of a system and serve as a basis for automated test case generation in many model-based testing (MBT) tools. Therefore, automated support for the transition from use cases to state machines would provide significant, practical help for testing system requirements. Additionally, traceability could be established through automated transformations, which could then be used, for instance, to link requirements to design decisions and test cases, and to assess the impact of requirements changes. In this paper, we propose an approach to automatically generate state machine diagrams from use cases while establishing traceability links. Our approach is implemented in a tool, which we used to perform three case studies, including an industrial one. The results show that high-quality state machine diagrams can be generated and then manually refined at reasonable cost to support MBT. The automatically generated state machines were shown to largely conform to the actual system behavior, as evaluated by a domain expert.

 

Test Data Generation for Unit Testing and Combinatorial Testing

Jian Zhang ([email protected]), Institute of Software, Chinese Academy of Sciences (ISCAS)

Abstract: I will talk about our work on the automatic generation of test data for the unit testing of C programs, and for combinatorial testing (by constructing small covering arrays and orthogonal arrays).
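As a small generic illustration of the covering-array idea mentioned above (not the speaker's construction), the check below confirms that a classic 4-row orthogonal array covers every pair of values of three binary parameters, where exhaustive testing would need all 2^3 = 8 rows.

```python
from itertools import combinations, product

# Illustrative only: verify that a candidate array is a pairwise covering
# array, i.e. for every pair of columns, every combination of parameter
# values occurs in at least one row. The 4-row array below is the classic
# orthogonal array for three binary factors.

def is_pairwise_covering(array, levels):
    """True if every pair of columns covers all value combinations."""
    n_cols = len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        seen = {(row[c1], row[c2]) for row in array}
        if set(product(levels, repeat=2)) - seen:
            return False
    return True

oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_pairwise_covering(oa, [0, 1]))  # True: 4 rows give full pairwise coverage
```

Dropping any row breaks the property, which is what makes such arrays minimal test suites for pairwise interaction coverage.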

 

Testing Service-Centric Software

Lu Zhang ([email protected]), Peking University (PKU)

Abstract: With the increasing popularity of Web services, using services to build software systems has become an important way of software development. However, due to the special characteristics of Web services, testing service-centric software is somewhat different from testing traditional software. In this talk, I present our research on two issues specific to testing service-centric software. The first issue is the unavailability of service source code. The second issue is the access quota of services imposed by service providers.
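Both issues invite test infrastructure that stands in for the real service. The sketch below is a hypothetical illustration — the class and its design are mine, not the speaker's technique: a record-and-replay stub that replays cached responses so that tests stay within the provider's access quota.

```python
# Hypothetical illustration (not from the talk): testing against a Web
# service whose source code is unavailable and whose provider enforces an
# access quota. Recorded responses are replayed for free; the real service
# is invoked only while quota remains.

class QuotaAwareStub:
    def __init__(self, real_call, quota):
        self.real_call = real_call  # callable invoking the real service
        self.quota = quota          # max allowed real invocations
        self.cache = {}             # recorded request -> response pairs

    def invoke(self, request):
        if request in self.cache:   # replay a recorded response for free
            return self.cache[request]
        if self.quota <= 0:
            raise RuntimeError("service quota exhausted and no recording")
        self.quota -= 1
        response = self.real_call(request)
        self.cache[request] = response
        return response

# A local fake standing in for the remote service endpoint.
stub = QuotaAwareStub(real_call=lambda req: req.upper(), quota=2)
print(stub.invoke("rate"))  # RATE  (consumes one unit of quota)
print(stub.invoke("rate"))  # RATE  (replayed from cache, quota untouched)
print(stub.quota)           # 1
```

The same wrapper shape also addresses the source-code issue: assertions run against observable responses rather than internal service state.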

 

Dynamic Random Testing

Kai-Yuan Cai ([email protected]), Beihang University (BUAA)

Abstract: Random testing is popular in the area of software testing and often serves as a benchmark technique in comparison with other software testing techniques. Suppose that the input domain of the software under test is divided into m classes. Random testing often assumes that the underlying test profile is defined as a probability distribution p = (p_1, ..., p_m), which may be uniform or non-uniform. That is, a test case is selected from class i with a constant probability p_i during software testing. In this talk, we follow the idea of software cybernetics and propose a simple technique of dynamic random testing, in which the test profile, i.e., the probability distribution p, is dynamically updated during software testing. Case studies with three subject programs show that the proposed technique of dynamic random testing (DRT) is encouraging. In comparison with random testing under a uniform test profile, the DRT technique uses fewer test cases to detect the same number of defects and shows lower fluctuation in its performance over a number of iterations. In comparison with Adaptive Testing (AT) techniques, the DRT technique uses slightly more test cases to detect the same number of defects; however, the computational overhead incurred for test case selection in adaptive testing is almost entirely avoided. The effects of a tuning parameter in the DRT algorithm on testing performance are also investigated through a series of experiments; the data indicate that this parameter has a noticeable impact on the performance of the DRT technique and should be assigned appropriately.
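One plausible reading of the profile-update idea is sketched below. This is a toy interpretation — the talk's actual DRT update rule and its parameter may differ: probability mass is shifted toward classes whose tests reveal defects, with a step size `epsilon` playing the role of the tuning parameter the abstract discusses.

```python
import random

# Toy sketch of dynamic random testing (the actual DRT rule may differ):
# the test profile p = (p_1, ..., p_m) over m input classes starts uniform
# and is adjusted after every test execution.

def drt_step(profile, cls, revealed_defect, epsilon=0.05):
    """Return an updated copy of `profile` after testing class `cls`."""
    p = list(profile)
    m = len(p)
    if revealed_defect:
        # Reward: move up to epsilon of mass from other classes to `cls`.
        for j in range(m):
            if j != cls:
                delta = min(p[j], epsilon / (m - 1))
                p[j] -= delta
                p[cls] += delta
    else:
        # Penalize: return up to epsilon of `cls` mass to the other classes.
        delta = min(p[cls], epsilon)
        p[cls] -= delta
        for j in range(m):
            if j != cls:
                p[j] += delta / (m - 1)
    return p

random.seed(0)
profile = [1 / 3] * 3  # uniform initial test profile over 3 classes
for _ in range(100):
    cls = random.choices(range(3), weights=profile)[0]
    defect = cls == 0 and random.random() < 0.3  # class 0 is defect-prone
    profile = drt_step(profile, cls, defect)
print([round(p, 2) for p in profile])  # mass typically drifts toward class 0
```

Both branches conserve total probability, so the profile remains a valid distribution throughout the run.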

 

An Efficient Schedule-based Active Testing for Multithreaded Java Programs

Jian-Jun Zhao ([email protected]), Shanghai Jiao Tong University (SJTU)

Abstract: Multithreaded programs are notorious for their non-deterministic thread interleavings. In particular, many concurrency bugs occur only under certain rarely executed interleavings, making it extremely difficult to detect them effectively and efficiently within the huge state space of thread interleavings.

To solve this problem, we propose an efficient testing approach for multithreaded Java programs. The essential idea of our approach is to fuse static analysis and testing into active testing through schedules. On one hand, our approach utilizes static analysis to produce may-buggy schedules, so it can follow the notion of runtime-based model checking, which explores thread interleavings soundly. On the other hand, it also utilizes static analysis to actively prune and sort may-buggy schedules according to feedback on alias pairs and paths after each test run, so it can explore thread interleavings efficiently. We have implemented this testing approach and applied it to the detection of atomicity-related violations.

 

Model-Driven Testing and Verification

Linzhang Wang ([email protected]), Nanjing University (NJU)

Abstract: ---to be provided

 

An empirical analysis of the relationships between structural metrics and unit testability in object-oriented systems

Yuming Zhou ([email protected]), Nanjing University (NJU)

Abstract: Previous research has focused on the theoretical relationships between structural metrics and the unit testability of classes in object-oriented systems. However, few empirical studies have examined the actual impact of structural metrics on the effort involved in unit testing classes. In this paper, we employ multiple linear regression (MLR) and partial least squares regression (PLSR) to empirically investigate the relationships between, on the one hand, a large number of structural metrics, including size, cohesion, coupling, inheritance, and complexity metrics, and, on the other, the effort involved in unit testing classes. Our results, derived from an open-source software system, Apache Ant 1.7.0, show that: (1) most structural metrics are statistically related to this effort in the expected direction, among which size, complexity, and coupling metrics are the most important predictors; (2) multivariate regression models based on structural metrics cannot accurately predict the required effort, although they are better able to rank the testability of classes, especially compared with simple size and random models; (3) multivariate regression models based on structural metrics can accurately predict the total effort involved in unit testing all classes in a system; (4) structural metrics available at late stages of development can potentially produce significant improvements in prediction accuracy and class-ranking capability; and (5) the transition from MLR to PLSR significantly improves the ability to rank the testability of classes and to predict the total effort involved in unit testing all classes in a system, but not the ability to predict the effort involved in unit testing individual classes.

 

Semantic-Based Modeling and Testing

Xiaoying Bai ([email protected]), Tsinghua University

Abstract: --to be provided

 

Distributed Test Automation and Its Evolution

Ji Wu ([email protected]), Beihang University (BUAA)

Abstract: Test automation not only saves testing effort but also increases testing effectiveness. In some cases, manual testing may fail to achieve given testing goals, particularly for testing network-based applications. In this presentation, I will introduce our recent research on distributed test automation. The presentation has three parts: (1) test modeling and script generation; (2) a distributed test runtime with TTCN-3; (3) distributed testing services over the Internet.

 

Automated Testing Practices in Software Testing Center (STC) of State Information Center

Li Gang ([email protected]), Chief Technical Manager, STC of State Information Center

Abstract: This presentation focuses on how to apply automation techniques in practical testing projects, covering unit testing, functional testing, and performance testing.

 

Research and Practice on Automated Software Testing

Zhang Wei ([email protected]), Shandong Software Testing Center

Abstract: This report first describes the general software testing process in our center, and then presents two case studies from the banking and tax industries, showing how we manage and carry out practical testing projects. For automated testing in particular, we analyze the problems encountered and summarize the experience gained from our function-testing and performance-testing practice, which may serve as a reference for further research.

 

High-assurance Software and Government OA Testing Project Practice in BHSTEL

He Zhitao ([email protected]), Beihang University (BUAA)

Abstract: This talk introduces BHSTEL's testing practice in the high-assurance software and government OA (office automation) fields, focusing on its testing methods and quality control system, as well as recent research results on defect discovery models.

 
