I think a great many problems in American society can be traced to the inability of people to intuitively make trade-offs between Type-1 errors and Type-2 errors. For example, after September 11th, people demanded that no one ever be allowed to bring ANYTHING DANGEROUS onto a plane ever again, no matter the cost. People started saying things like "If it saves even one life, it's worth it." People thought that if we could just make security more strict we could prevent these problems. Eventually we realized that in our efforts to avoid missing real threats, we were simply categorizing everything as a threat, which is particularly unhelpful. The easiest (and probably only) way to prevent passengers on airplanes from posing a threat to anyone is to ban commercial air travel. Of course, as we were recently reminded, even the pilots can pose a threat.
Part of the problem with the TSA specifically is that it is an unaccountable bureaucracy, incompetent and incapable of doing its job, but beyond that, a mandate to eliminate all risk will always cause problems. By focusing only on never missing a threat, it ignores the very real costs of falsely detecting threats, and we can see this in many different facets of American life. Trying to prevent some real but small health risks has led to pregnant women being told they can't eat or drink or do anything at all. Trying to prevent a handful of children from dying in hot cars has led to parents who leave their kids in a perfectly comfortable car for 90 seconds being arrested, having their children taken by CPS, and generally having their lives ruined (along with everyone else involved, except whichever busybody called the police). Trying to prevent a very small number of police officers from being shot while interacting with suspects has led to hundreds of innocent people being preemptively shot. Trying to prevent... something has led to the NSA collecting every phone call in the country.
Part of this is the law of diminishing returns: eliminating the last 5% of risk can cost more than eliminating the first 95%. But that only becomes a problem when people are unwilling to admit those costs. When the people responsible for preventing Type-2 errors are unconcerned about making Type-1 errors, you get a LOT of Type-1 errors. If you are a regulator who will lose her job if any new building has an electrical fire, preventing all new construction suddenly looks like a viable plan.
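To make the trade-off concrete, here's a toy sketch (every number in it is made up for illustration, not taken from any real screening system): imagine a screener that assigns each passenger a "suspicion score" and flags everyone above some threshold. Because the scores of real threats and innocent passengers overlap, the only way to drive the miss rate (Type-2 errors) to zero is to lower the threshold until you're flagging essentially everyone (Type-1 errors).

```python
import random

random.seed(0)

# Hypothetical scores: real threats score higher on average,
# but the two distributions overlap, so no threshold separates
# them cleanly. Threats are rare; innocent passengers are not.
threats     = [random.gauss(0.7, 0.15) for _ in range(10)]
non_threats = [random.gauss(0.3, 0.15) for _ in range(100_000)]

def error_rates(threshold):
    # Type-2 errors: real threats the screener fails to flag.
    misses = sum(score < threshold for score in threats)
    # Type-1 errors: innocent passengers the screener flags.
    false_alarms = sum(score >= threshold for score in non_threats)
    return misses, false_alarms

# Sweep the threshold from strict to "flag everyone".
for threshold in (0.9, 0.5, float("-inf")):
    misses, false_alarms = error_rates(threshold)
    print(f"threshold={threshold}: {misses} missed threats, "
          f"{false_alarms} innocent passengers flagged")
```

At the "flag everyone" end you miss nothing, and you also flag all 100,000 innocent passengers. A screener judged only on misses will always drift toward that end of the dial, which is the incentive problem described above.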
Of course these trade-offs are difficult, and reasonable people can disagree about where the lines should be drawn between missing real threats and flagging false ones. But when people focus their attention on only one side of the problem, when they worry only about catching every threat, they lose sight of the very real cost of false positives.