One of the most fundamental reasons why organizations do not always get the most from their investment in test automation is that they lack experience in choosing what and what not to automate. Most software applications do, in fact, provide plenty of good opportunities to increase productivity and improve reliability of testing through the effective use of automated testing tools. Determining which tests to automate might be the most important factor in the success or failure of your test automation efforts.
Let's look at a few criteria that help to distinguish good from poor candidates:
This first recommendation might seem like common sense: choose tests that are short and relatively free of complexity. Especially when new to test automation, select tests that involve only a few pages or screens in your application. Avoid long, complex transactions and tests that span more than one software application. Once you have built up a library of reusable test steps, you can then think about combining them to form more advanced tests.
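As a rough sketch of what "reusable test steps" can look like in practice, the example below keeps each step short and single-purpose, then composes them into one test. The `FakeApp` class, the credentials, and the step names are all hypothetical stand-ins, not part of any real tool.

```python
# A minimal sketch of reusable test steps. FakeApp is a made-up
# stand-in for the application under test; names are illustrative.

class FakeApp:
    """Hypothetical application under test."""
    def __init__(self):
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = (user == "demo" and password == "secret")
        return self.logged_in

    def search(self, term):
        assert self.logged_in, "must log in first"
        return [term.upper()]

# Reusable steps: each does one short thing and can be recombined
# later into longer, more advanced tests.
def step_login(app):
    assert app.login("demo", "secret")

def step_search(app, term):
    return app.search(term)

# A short test composed from the steps above.
def test_search_after_login():
    app = FakeApp()
    step_login(app)
    results = step_search(app, "widget")
    assert results == ["WIDGET"]

test_search_after_login()
print("ok")
```

Because each step is small and self-contained, a longer transaction later is just another sequence of the same step functions.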
Choose tests that have many data permutations. Almost every application has processing paths that need to be traversed with lots of different combinations of data. By creating a reusable test that separates actions from data, you can get a good return on your time investment by passing many sets of data to your data-driven test. For the amount of work that it takes to create one automated test, you can cover many different test requirements.
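The separation of actions from data can be sketched as follows: the test logic is written once, and a table of input/expected pairs drives it. The `discount_price` function and the case values are invented for illustration.

```python
# Sketch of a data-driven test: one test routine, many data sets.
# discount_price is a made-up example of application logic.

def discount_price(price, pct):
    """Apply a percentage discount, rounded to cents."""
    return round(price * (1 - pct / 100), 2)

# Actions live in the test function; data lives in this table.
# Adding a new test case costs one line, not one new test.
CASES = [
    (100.0, 10, 90.0),
    (100.0, 0, 100.0),
    (200.0, 25, 150.0),
    (0.0, 25, 0.0),
]

def run_data_driven_test():
    for price, pct, expected in CASES:
        actual = discount_price(price, pct)
        assert actual == expected, (price, pct, actual)
    return len(CASES)

print(run_data_driven_test(), "cases passed")
```

Test frameworks offer the same idea natively (for example, parameterized tests), but the principle is identical: one automated test, many covered requirements.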
Some tests include verification steps whose expected results change frequently; these make poor early candidates. Until you develop the skills to automate the generation of expected results at runtime, stick to tests where the expected results are stable. It's worth the time and effort to manage the test data environment before you begin developing automated tests, so that you can be sure results are predictable and repeatable.
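One common way to keep expected results stable is to reset the test data to a known baseline before each run, so the verification values never drift. The baseline records and reset routine below are illustrative assumptions, not a prescribed setup.

```python
# Sketch: predictable results by resetting known test data before
# each run. The baseline records here are invented for illustration.
import copy

BASELINE = {"orders": [("A-1", 100), ("A-2", 250)]}

def reset_test_data():
    # Deep-copy the baseline so one test run cannot pollute the next.
    return copy.deepcopy(BASELINE)

def test_order_total():
    db = reset_test_data()
    total = sum(amount for _, amount in db["orders"])
    # Stable: because the data is reset, the expected value never drifts.
    assert total == 350

test_order_total()
print("stable")
```

In a real environment the "reset" might restore a database snapshot or reload fixture files, but the principle is the same: control the data, and the expected results stay predictable.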