This is Part 2 of my series, CPSIA By The Numbers. Part 1 is here. This series is also being cross-posted to Endangered Whimsy.
Searching for certainty is an entirely noble human endeavor. We long for the days when we were younger, when Mom or Dad could hold us in their arms and we would know we were safe. As we become adults, we venture out into the cold, uncertain world, and eventually we learn to live there by sticking to the course we feel most likely to bring a good outcome.
Groups like Consumers Union seem to me to be taking an immature approach to the problem of lead in children's products. They wanted CPSIA because they believed it would make us safe from all lead, forever. Like little children, they wanted Mommy and Daddy Congress to make it all go away, and they are mad at that bad bad lady Nancy Nord for not doing what Mommy and Daddy said. If only Congress had that power. Congress has the power to make laws, but it does not have the power to make people 100% safe. Even if CPSIA is fully implemented, we will not be 100% safe from lead. Setting aside the fact that most lead exposure comes from lead in house paint, let's take a look at why this is.
We are going to use what nerds like me call a "stochastic," or probability-based, approach. Probability is the most counter-intuitive branch of mathematics, so I'll do my best to explain this approach in layman's terms.
Suppose a clothing manufacturer, let's call him Ben, buys 10,000 metal snaps from a snap manufacturer, Jessica. Ben wants assurances that Jessica's snaps are CPSIA compliant to the 100ppm standard. So Jessica pulls out her XRF gun and tests 100 snaps (that's 1% of the snaps), and they all test around 60ppm, near but under the lead content limit.
Ben and Jessica now both believe all 10,000 snaps are compliant, but there's something they don't know. Due to random fluctuations in the lead content of the snaps, 200 of the 10,000 snaps exceed 100ppm. Neither Ben nor Jessica can know this, because they have no way to know without testing all the snaps. Now, if Jessica wants absolute certainty, she can test each and every one of the 10,000 snaps. But Jessica does not have time for this, and neither does Ben. Also, if Jessica were to task one of her employees to do this, it would raise the cost of the snaps so much that Ben could not afford them. So Ben and Jessica feel this is good enough.
Now Ben has used the snaps in his clothing line. He sends 25 garments that have 4 snaps each (total of 100 snaps) off to Jennifer Taggart of The Smart Mama for testing in compliance with CPSIA. When he gets the results back, he finds that two of the snaps have failed.
Now Ben is in a deep fix. The failed snaps were on a size 4 green shirt and a size 10 blue shirt, but it would make no sense to pull all the size 4 green shirts and size 10 blue shirts, because the 200 defective snaps are now randomly distributed throughout his entire clothing line. Ben can have every one of the snaps tested and pull the garments that have failed snaps. Or he can pull the entire line, losing all the money he'd hoped to make from it. Or he can sell the clothing anyway and hope nobody notices a few defective snaps.
Here's what Ben and Jessica didn't know, but we can figure out:*
The original testing results were something of a fluke. Given the numbers we chose at the outset, the chance that all 100 snaps in any given sample would pass was actually only about 13%. This means that in any other sample of 100 snaps, there is about an 87% chance that at least one of them will fail. If Ben sells his clothing line and the CPSC comes by and randomly tests 100 of his snaps, there is an 87% chance they find a failed snap, forcing him to recall the entire line. If there were an 87% chance of rain today, you'd bring your umbrella. That alone makes selling the clothing line untenable, and the real situation is worse, because CPSC audits are not random: perhaps Jessica also supplies another clothing company that ended up doing a recall because of lead in its snaps, and that puts all the other companies Jessica supplies, Ben included, under the microscope.
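For the curious, here is roughly where the 13% and 87% come from. This is only a sketch using the binomial model described in the footnote, which treats each sampled snap as an independent draw; strictly speaking the sample is drawn without replacement, so the exact numbers differ very slightly.

```python
# Binomial sketch of Ben and Jessica's first scenario: 200 defective snaps in a
# lot of 10,000 (2%), and a random sample of 100 snaps tested.
p_defective = 200 / 10_000       # 2% of the snaps exceed 100ppm
n_sampled = 100                  # snaps tested in one sample

p_all_pass = (1 - p_defective) ** n_sampled   # every sampled snap happens to pass
p_at_least_one_fails = 1 - p_all_pass         # the sample catches a failed snap

print(f"Chance a 100-snap sample passes clean:   {p_all_pass:.0%}")            # ~13%
print(f"Chance the sample catches a failed snap: {p_at_least_one_fails:.0%}")  # ~87%
```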
So, having been burned, Ben and Jessica resolve to do something about the problem. Jessica can get more expensive, purer metal, so that there are only 10 defective snaps out of every 10,000. Jessica can also test more snaps. The more snaps Jessica tests, the greater the chance she'll find defective ones; but unless she tests them all, she has no way of knowing whether she's gotten them all-- and one might slip through and jeopardize Ben's clothing line again. In fact, even if Jessica is able to reduce the number of defective snaps to 10 out of 10,000 and tests 500 snaps from every 10,000, she still runs a 40% chance of finding a defective snap in her testing. Ben's safer now, but if he keeps his testing regimen unchanged, he still has a 10% chance of finding defective snaps in his clothing line. That's down from the 87% chance he had before, but it's not certainty. Ben's financial backers (maybe a bank, maybe his family) might not find palatable a 10% chance that the entire clothing line will be unsalable. That's like rolling a pair of dice and hoping they don't come up with a sum of 5 (about an 11% chance).
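The same back-of-the-envelope arithmetic, under the same independence assumption, gives the 40% and 10% figures (and the dice comparison); the numbers in the paragraph above are just these, rounded.

```python
# Binomial sketch of the improved scenario: 10 defective snaps in 10,000
# (p = 0.001), with Jessica sampling 500 snaps and Ben still sampling 100.
p_defective = 10 / 10_000

p_jessica_finds_one = 1 - (1 - p_defective) ** 500   # just under 40%
p_ben_finds_one = 1 - (1 - p_defective) ** 100       # just under 10%
p_dice_sum_of_5 = 4 / 36                             # four of the 36 possible rolls sum to 5

print(f"Jessica's 500-snap sample flags a defect: {p_jessica_finds_one:.1%}")  # ~39.4%
print(f"Ben's 100-snap sample flags a defect:     {p_ben_finds_one:.1%}")      # ~9.5%
print(f"Two dice coming up with a sum of 5:       {p_dice_sum_of_5:.1%}")      # ~11.1%
```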
It gets worse. Suppose Jessica gets really zealous and decides not only to improve the quality of her metal so that there are only 10 defectives in 10,000, but also to test 1,000 out of every 10,000 snaps. (Assume for the moment that the extra cost of all this doesn't bankrupt Jessica or turn off her customers.) Now it's Jessica who has the problem: every time she runs a test of 1,000 snaps, she has a 63% chance of finding at least one defective snap. She can throw out the defective snaps as she finds them, and since we know there are only 10, eventually she'll get them all, though as a practical matter it will be easier to just test every snap than to keep testing lots of 1,000 snaps. But unless Jessica tests every snap, she will not know she has found them all. Remember, Jessica does not know exactly how many defective snaps there are in each batch of 10,000. We know only because in this example we're the omniscient observer, and we set the conditions of the problem. As a practical matter, nobody knows how many defectives are out there in any given batch of anything!
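Again under the same binomial assumptions, Jessica's 63% headache is one line of arithmetic:

```python
# Binomial sketch of the zealous-testing scenario: 10 defectives in 10,000
# (p = 0.001) and a sample of 1,000 snaps per test run.
p_defective = 10 / 10_000
p_run_flags_a_defect = 1 - (1 - p_defective) ** 1000
print(f"Chance a 1,000-snap test run finds a defective: {p_run_flags_a_defect:.0%}")  # ~63%
```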
Jessica knows her snaps are (mostly) safe. She tests them over and over and gets pass, pass, pass, pass, pass. But Jessica still cannot guarantee Ben that his product line won't be jeopardized by using her snaps, no matter how much she tests and tests and tests, unless she tests all the snaps one by one. Ben is in the same fix if Jessica cannot test all the snaps: even if his lot of samples passes testing, he cannot guarantee that there are no defective snaps in the entire batch. The only way to guarantee it is for somebody to test all 10,000 of the snaps.
Bottom line: it is mathematically impossible to be certain you have found all the defective objects without going to the expense of testing them ALL. And that's assuming testing is 100% accurate, which it's not. And to add insult to injury, the more zealously you test by sampling, the more confused you will be about the safety of your product. CPSIA was supposed to reduce confusion about product safety, but now you have mathematical proof that it does exactly the opposite.
Now we apply our findings to the issue of protecting children from lead exposure.
In practice, Jessica can only test samples of her snaps. And CPSIA only requires Ben to test samples of his shirts. There is a probability, however small, that a defective snap will slip past both Jessica's AND Ben's testing, and be discovered by, say, a consumer group doing in-store testing as a public service. This will put both Ben and Jessica in a real fix. They both did their due diligence under CPSIA. Ben did all the required testing, and he vetted his supplier properly. Since suppliers aren't liable under CPSIA, Jessica went above and beyond the call of duty. And still a defective snap got through. And think about the retailer of Ben's clothing line. Ben provided a 100% accurate General Conformity Certificate-- it was based on tests run by a third party, which turned up no snaps over the lead limit. The retailer had every reason to believe that Ben's clothing line was perfectly safe-- and is now in jeopardy along with Ben.
Let's put some numbers to this. Suppose Jessica decides that testing samples of 200 snaps is adequate, and Ben still tests 100 snaps on the garments in his line; and let's suppose that Jessica used the purer metal, so there are only 10 defective snaps in the lot of 10,000. The chance that both their tests detect no defective snaps is 74%, meaning that they stand about a 3 in 4 chance of unknowingly passing a defective snap on to the public. If you were a retailer, would you want to take that chance? And that's just one clothing line. If the retailer carries 20 children's clothing lines similar to Ben's and they all have the same chance of having undetected lead in the snaps, it is very nearly certain that something in the retailer's store is noncompliant. The retailer is going to be just fine-- unless, of course, somebody spreads rumors that their store is noncompliant. If the CPSC or the state Attorney General decides to check their store for noncompliant clothing instead of just checking their GCCs, they are so totally screwed.
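Here is a sketch of the retailer's arithmetic under the same binomial assumptions; treating each of the 20 clothing lines as an independent lot of 10,000 snaps is my simplification for illustration, not something the retailer could actually know.

```python
# Binomial sketch of the retailer's exposure: 10 defective snaps per 10,000
# (p = 0.001), with Jessica sampling 200 snaps and Ben sampling 100, so 300
# sampled snaps stand between a defective snap and the store shelf.
p_defective = 10 / 10_000

p_line_slips_through = (1 - p_defective) ** 300               # both tests come back clean: ~74%
p_store_has_bad_snaps = 1 - (1 - p_line_slips_through) ** 20  # at least one of 20 similar lines

print(f"One clothing line slips defectives past both tests: {p_line_slips_through:.0%}")
print(f"At least one of 20 such lines does so: {p_store_has_bad_snaps:.12f}")  # very nearly certain
```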
Thus we see that CPSIA, strict and wasteful as it is, is still not capable of preventing lead exposure. And what's worse, it holds over each manufacturer's and retailer's head a significant chance that even if they do everything right, they can still be fined, or worse, jailed. People in business are used to taking risks, but that doesn't mean they're all willing to take the largest possible risks.
Sorry, Consumers Union, your Mommy and Daddy are only human after all.
* Mathematics aficionados will recognize this as a binomial probability distribution with n=100 and p=0.02, under the assumptions that defective snaps are indistinguishable from good snaps and that XRF testing detects lead levels perfectly (it doesn't; XRF readings come with 95% confidence intervals, which makes these results even more uncertain).