Survivorship bias is when you only pay attention to the data points that "survived" and ignore the ones that failed. A classic example comes from comparing financial funds: funds that lose money eventually get shut down, which boosts the average performance of the funds that remain. You can be sure that the people who run financial firms are aware of this.
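Here's a minimal sketch of the fund version, with made-up numbers and a made-up shutdown rule, just to show how dropping the failures inflates the average:

```python
# Hypothetical illustration of survivorship bias in fund returns.
# The return distribution and the shutdown rule are invented for this sketch.
import random

random.seed(0)

def simulate_fund(years=10):
    """Return (annual returns, survived?). A fund is shut down after a bad stretch."""
    returns = []
    for _ in range(years):
        returns.append(random.gauss(0.05, 0.15))  # 5% mean return, 15% volatility
        if sum(returns[-3:]) < -0.20:              # closed after losing ~20% over 3 years
            return returns, False                  # did not survive
    return returns, True                           # survived the full period

funds = [simulate_fund() for _ in range(10_000)]

def mean_annual_return(subset):
    all_returns = [r for returns, _ in subset for r in returns]
    return sum(all_returns) / len(all_returns)

survivors = [f for f in funds if f[1]]
print("average return, all funds:      %.3f" % mean_annual_return(funds))
print("average return, survivors only: %.3f" % mean_annual_return(survivors))
# Every fund was drawn from the same distribution, but the survivors-only
# average comes out noticeably higher.
```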
Apparently survivorship bias is a problem when comparing schools using test scores. If a school is particularly strict and has a high drop-out rate, it will tend to have higher scores than a more lenient school that gives students more chances after they screw up. This will be true whether or not being strict is a good policy for students.
We don't have to assume that schools are intentionally gaming test scores. If charter schools with higher test scores tend to survive, then this will have an evolutionary effect encouraging strictness. But you can be sure that many school administrators are smart and, even if they didn't know the effect of strictness on test scores going in, they will eventually figure it out.
(Of course, people don't have to act on incentives, and we can praise schools that try to do the right thing in spite of them. But knowing that leniency is against your interests probably doesn't help, particularly when stricter schools are getting a lot of positive publicity.)
The statistical solution is to make sure that dropouts are counted when evaluating performance. We might even consider taking the school's admission policies into account, since not letting poor-performing students into the school in the first place is another way to increase test scores.
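To make that adjustment concrete, here's a small sketch comparing a naive average (dropouts ignored) with one that counts them. The scores, the dropout rule, and the schools are hypothetical; scoring a dropout as zero is just one simple way to penalize expelling students.

```python
# Hypothetical comparison of a strict school vs. a lenient one,
# with and without counting dropouts in the average.
import random

random.seed(1)

def simulate_school(strict, n_students=1000):
    """Each student has an underlying ability; a strict school expels low performers."""
    scores, dropouts = [], 0
    for _ in range(n_students):
        ability = random.gauss(70, 15)
        if strict and ability < 60:
            dropouts += 1          # removed before the test, so never counted naively
        else:
            scores.append(ability)
    return scores, dropouts

def naive_average(scores, dropouts):
    return sum(scores) / len(scores)            # ignores dropouts entirely

def adjusted_average(scores, dropouts):
    # Count dropouts as zeros, so expelling students no longer raises the average.
    return sum(scores) / (len(scores) + dropouts)

for name, strict in [("strict school", True), ("lenient school", False)]:
    scores, dropouts = simulate_school(strict)
    print(f"{name}: naive {naive_average(scores, dropouts):.1f}, "
          f"adjusted {adjusted_average(scores, dropouts):.1f}")
# Both schools draw students from the same distribution. The strict school
# wins on the naive average but loses once its dropouts are counted.
```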
But if schools can choose how to present their test-score statistics when promoting themselves, and we naively believe what we read in the news, these adjustments won't be made. Furthermore, this will tend to convince parents that stricter schools are better for their children, whether or not that's really true.
Something to remember when thinking about how incentives work.