Definition - What does Fuzz Testing mean?
Fuzz testing (also called fuzzing) describes a software testing process that takes a randomized approach to generating input. IT professionals often use the term for efforts to stress-test applications by feeding them random or malformed data in order to uncover errors, crashes or hangs. The idea behind fuzz testing is that software applications and systems can harbor many bugs related to data input that ordinary, well-formed test cases never trigger.
Techopedia explains Fuzz Testing
For example, fuzz testing may involve supplying different kinds of integers, character strings, floats and other values that, if not handled correctly, may cause the software application to hang or crash. A common example is an integer field that is meant to accept only a few specific numbers, such as one through five, but where a user can enter any integer because of the generic setup of the input field or control. Entering a very large value may then cause an error or crash. In fuzz testing, testers feed many kinds of random input into the program and document any bugs that occur. In many cases they use a tool called a fuzzer to generate and inject the random data automatically.
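The loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical harness, not a real fuzzer: parse_rating stands in for the integer field that should only accept 1 through 5, and the fuzz function simply throws random printable strings at it and records every unhandled exception as a potential bug.

```python
import random
import string

def parse_rating(text):
    # Hypothetical target: a field meant to accept only the integers 1-5.
    value = int(text)  # raises ValueError on non-numeric input
    if not 1 <= value <= 5:
        raise ValueError(f"rating out of range: {value}")
    return value

def fuzz(target, trials=1000, seed=0):
    """Feed random strings to `target`; record any exception as a finding."""
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    failures = []
    for _ in range(trials):
        length = rng.randint(0, 8)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            # In a real run, each finding would be triaged and filed as a bug.
            failures.append((data, type(exc).__name__))
    return failures

bugs = fuzz(parse_rating)
```

Real fuzzers are far more sophisticated (they mutate valid inputs, track code coverage and detect hangs as well as crashes), but the core loop is the same: generate input, run the target, log anything that goes wrong.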
The idea of fuzz testing is usually attributed to University of Wisconsin professor Barton Miller and his work in the late 1980s on the reliability of UNIX utilities. The term is sometimes linked to the more general term fuzzy logic, a type of reasoning based on degrees of truth rather than strict true/false values, although the resemblance between the two concepts is largely one of name. Some IT professionals also talk about fuzz security testing, where testers deliberately feed malformed or malicious inputs into a system in order to identify security vulnerabilities.