

In order to automate a test, we have to know the expected outcome so that we can check whether the actual outcome is valid or invalid. Automated testing is not testing; it is the checking of facts.
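To make that concrete, here is a minimal sketch of what "checking a fact" means in code. The `apply_discount` function and its values are hypothetical, not from the article; the point is only that the expected outcome is fixed up front and the check compares the actual result against it.

```python
# Hypothetical example: a discount calculator we want to check automatically.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    expected = 90.0                      # outcome known in advance
    actual = apply_discount(100.0, 10)   # outcome produced by the code
    assert actual == expected            # the automated "fact check"

test_apply_discount()
```

If the expected value were unknown, there would be nothing to assert against, and the test could not be automated in any meaningful way.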
Expecting automation to find more bugs is probably the worst answer I have heard in regards to why we automate a test. There is a clear distinction between what a manual tester does and what an automated test checks. Furthermore, there are other reasons why automated tests fail to find defects: there is always more chance of finding bugs in new features than in existing functionality.
Automated tests generally check for any regression in the system after new code has been implemented.
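A regression check of this kind can be sketched as a small suite that pins down existing behaviour, so any new code that changes it fails the run. The `slugify` function and its cases are hypothetical, chosen only to illustrate the pattern.

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug (lowercase, hyphen-separated)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Existing behaviour, recorded so that future changes cannot silently break it.
REGRESSION_CASES = {
    "Hello World": "hello-world",
    "  Already--slugged  ": "already-slugged",
}

def test_no_regressions():
    for title, expected in REGRESSION_CASES.items():
        assert slugify(title) == expected

test_no_regressions()
```

The suite is re-run after every change; it says nothing about whether the new code is good, only that the old facts still hold.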

When I ask candidates why we automate a test, some of the answers I get are:

Increase Test Coverage

This answer is quite valid, but how do we define coverage? If we have 100 tests, how can we measure the percentage of coverage? With a mature test automation practice in place, you could be running hundreds of tests in a relatively short period of time. Because of this, we can create more test cases, more test scenarios, and test with more input data for a given feature, and thus gain more confidence that the system is working as expected. However, in testing, and especially in test automation, more tests don't really mean better quality or more chance of finding bugs. In a post where he discusses Test Coverage, Martin Fowler mentions: “If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.”

Save Time

This answer is also true, as you can spend valuable time doing interesting exploratory testing while the automated tests are running. However, for a brand new feature that has been developed, it could actually take longer to write automated scripts than to test the feature manually in the first instance. So it is important to note that saving time with automated tests requires an initial increased effort: scripting the tests, making sure they are code reviewed, and ensuring there are no hiccups in their execution.

Another answer I get is that automation finds more bugs. That one worries me, as I have never seen any metrics suggesting that more bugs were found by automation than by manual / exploratory testing.
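Fowler's "AssertionFreeTesting" point is easy to demonstrate. In this hypothetical sketch, both tests execute every line of `parse_age` and therefore count equally toward a coverage number, but only the second one checks anything:

```python
def parse_age(value: str) -> int:
    """Parse a string into a non-negative age."""
    age = int(value)
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

def test_parse_age_coverage_only():
    parse_age("42")                 # runs the code, checks nothing

def test_parse_age_real_check():
    assert parse_age("42") == 42    # runs the code AND checks a fact

test_parse_age_coverage_only()
test_parse_age_real_check()
```

A coverage target rewards both tests identically, which is why high coverage alone says little about the quality of a suite.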

If you are familiar with testing, you understand that successive development cycles require the execution of the same test suite repeatedly. This process can be extremely repetitive and time-consuming when performed manually. By leveraging a test automation tool, however, it is easier to write the test suite, replay it as required, reduce human intervention, and improve testing ROI. Why we automate a test is one of the questions I ask when I interview candidates for a Test Automation role, and to my surprise, many candidates seem to miss the main and most important reason. Some of the answers I get from candidates are quite credible, but still not the answer I am looking for.
In the software testing world, there are two types of testing techniques: manual and automated. Both aim to execute a test case and then compare the actual outcome with the expected result. To put it simply, manual testing is a technique in which a human checks that the software does what it is supposed to do. Automation testing, on the contrary, is the practice of running tests automatically, managing test data, and utilizing the results to improve software quality.
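The shared "compare actual with expected" step can be sketched as follows. The `login` function and its credentials are invented for illustration; the difference between the two techniques is only who performs the comparison.

```python
# Hypothetical login check used by both techniques.

def login(user: str, password: str) -> str:
    return "welcome" if (user, password) == ("alice", "s3cret") else "denied"

# Manual testing: a person runs the code, reads the output, and judges it.
print(login("alice", "s3cret"))   # a tester eyeballs this line of output

# Automated testing: the comparison itself is written down as code.
assert login("alice", "s3cret") == "welcome"
assert login("alice", "wrong") == "denied"
```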
