
What is Sequential Sampling?

Sequential sampling, also known as sequential analysis or sequential hypothesis testing, is a statistical method used for hypothesis testing that allows for a decision to be made as soon as sufficient evidence has been accumulated, without the need to specify the sample size in advance. This method was developed during World War II by Abraham Wald, primarily to detect manufacturing defects in munitions with as few tests as possible.

In traditional fixed-sample-size tests, the sample size n is determined in advance, and a decision is made only after all n samples have been observed. In contrast, sequential sampling evaluates the data as each observation arrives, so at any point the decision may be to accept the hypothesis, reject it, or continue sampling.
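This idea is usually formalized as Wald's sequential probability ratio test (SPRT). After each observation, the accumulated log-likelihood ratio is compared against two thresholds chosen from the desired error rates. As a standard sketch (the symbols below follow the usual SPRT notation rather than anything defined in this article):

$$S_n = \sum_{i=1}^{n} \ln \frac{f(x_i \mid \theta_1)}{f(x_i \mid \theta_0)}$$

Sampling continues while $a < S_n < b$; the test stops and rejects $H_0$ once $S_n \ge b$, or stops and accepts $H_0$ once $S_n \le a$, where Wald's approximations give $b \approx \ln\frac{1-\beta}{\alpha}$ and $a \approx \ln\frac{\beta}{1-\alpha}$ for target Type I and Type II error rates $\alpha$ and $\beta$.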

Key Characteristics of Sequential Sampling:

  • Early Decision Making: If strong evidence is found early in the process, the test can be concluded without taking more samples.
  • No Preset Sample Size: The number of observations can vary from one test to another, depending on when a conclusive decision is reached.
  • Efficiency: Typically requires fewer observations on average compared to fixed-sample-size tests, especially when the true state of nature is far from the null hypothesis.
  • Boundaries: Decision boundaries are defined for accepting or rejecting the hypothesis. If the accumulated data crosses one of these boundaries, a decision is made; otherwise, sampling continues (see the sketch after this list).
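
As a rough illustration of how those boundaries work in practice, here is a minimal Python sketch of an SPRT-style loop for a defect-rate test. The hypothesized rates, error targets, and the simulated batch are illustrative assumptions, not values from this article:

```python
import math
import random

def sprt_defect_test(observations, p0=0.02, p1=0.10, alpha=0.05, beta=0.10):
    """Sequential probability ratio test for a defect rate.

    H0: defect rate = p0   vs   H1: defect rate = p1.
    Returns a (decision, items inspected) pair.
    All parameter values here are illustrative assumptions.
    """
    # Wald's approximate decision boundaries on the log-likelihood ratio.
    upper = math.log((1 - beta) / alpha)   # cross this -> reject H0
    lower = math.log(beta / (1 - alpha))   # cross this -> accept H0

    log_lr = 0.0
    for n, defective in enumerate(observations, start=1):
        # Update the log-likelihood ratio with the new observation.
        if defective:
            log_lr += math.log(p1 / p0)
        else:
            log_lr += math.log((1 - p1) / (1 - p0))

        if log_lr >= upper:
            return "reject H0 (batch looks bad)", n
        if log_lr <= lower:
            return "accept H0 (batch looks good)", n

    return "undecided after all observations", len(observations)

# Example: simulate a batch whose true defect rate is 8%.
random.seed(1)
batch = [random.random() < 0.08 for _ in range(500)]
print(sprt_defect_test(batch))
```

Because the boundaries depend only on the chosen error rates, the number of items inspected varies from run to run, which is exactly the "no preset sample size" property described above.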

Example of Sequential Sampling

A toy manufacturing company produces stuffed animals. The quality control department wants to ensure that no more than 2% of the stuffed animals are defective. They decide to use sequential sampling to determine if a batch meets the quality standard.

Procedure:

  • They set two boundaries for their test:
    • Acceptance boundary: If they find 0 defective toys in the first 50 tested, they’ll assume the batch is good and accept it.
    • Rejection boundary: If they find 3 defective toys before testing 50, they’ll assume the batch is bad and reject it.
  • They start testing the toys one by one (a short simulation of this rule is sketched below).
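
To make the stopping rule concrete, here is a small Python sketch of this inspection plan. The batch data is simulated (the 5% true defect rate is an assumption for the demo), while the boundary counts are the ones stated above: accept after 50 clean toys, reject at the 3rd defect.

```python
import random

def inspect_batch(toys, accept_after=50, reject_at=3):
    """Sequentially inspect toys until a boundary is hit.

    Accept the batch if the first `accept_after` toys contain no defects;
    reject it as soon as `reject_at` defects are found.
    Returns a (decision, number of toys inspected) pair.
    """
    defects = 0
    for n, is_defective in enumerate(toys, start=1):
        if is_defective:
            defects += 1
        if defects >= reject_at:
            return "reject", n                 # rejection boundary hit
        if n >= accept_after and defects == 0:
            return "accept", n                 # acceptance boundary hit
    # With 1 or 2 defects, neither boundary is reached and sampling continues.
    return "continue sampling", len(toys)

# Simulated batch with an assumed true defect rate of 5%.
random.seed(0)
batch = [random.random() < 0.05 for _ in range(200)]
print(inspect_batch(batch))
```

Running this on batches with different true defect rates reproduces the three outcomes described next: an early rejection, a clean acceptance at 50 toys, or an ambiguous batch that keeps the inspectors sampling.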

Possible Outcomes:

  • Outcome 1: By the time they’ve tested 20 toys, 3 are found to be defective. This hits the rejection boundary. The quality control team stops testing further and rejects the batch, assuming it doesn’t meet the quality standards.
  • Outcome 2: They’ve tested 50 toys, and all are non-defective. This meets the acceptance boundary. The team stops testing and accepts the batch, assuming it meets the quality standards.
  • Outcome 3: They’ve tested 40 toys, with only 1 being defective. Neither boundary has been met, so they continue testing.

Results:

For Outcome 1, the team reached a decision after testing only 20 toys, which is more efficient than if they had decided to test a fixed number, say 100 toys.

For Outcome 2, they tested the full 50 toys; since none were defective, the batch clearly met the standard and no further testing was required.

Outcome 3 illustrates the scenario where the results hover between the boundaries, and more testing is required to reach a conclusion.

This example demonstrates the efficiency of sequential sampling. In cases where the batch quality (good or bad) is clear early on, it can save resources. However, in more ambiguous cases, the sampling might continue until a clear result emerges or until a predetermined maximum sample size is reached. The key is that the decision can be made as soon as enough evidence has been gathered, rather than waiting until a fixed number of samples have been tested.
