What is your forensic examiner's error rate?
Document examiners must understand the law pertaining to expert testimony. In 1923, Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), established the first widely followed standard for the admissibility of expert testimony: experts must use practices generally accepted in their field when performing examinations. In 1993, Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), changed the requirements for expert testimony in federal court by reinterpreting Rule 702 of the Federal Rules of Evidence. Many states have since adopted the Daubert approach, which makes the judge the gatekeeper who decides whether an expert is permitted to testify. The intent is to keep “junk science” out of the courtroom. Daubert set out a five-factor test for assessing the validity of the methodology used:
1. Empirical testing: the theory or technique must be falsifiable, refutable, and testable.
2. Subjection to peer review and publication.
3. A known or potential error rate for the methodology used.
4. The existence and maintenance of standards and controls concerning the operation of the methodology.
5. The degree to which the theory and technique is generally accepted by the relevant scientific community.
In United States v. Starzecpyzel, 880 F. Supp. 1027 (S.D.N.Y. 1995), the court determined that document examiners are “skilled experts” rather than scientists. As a result, the Daubert standard was not applied to forensic document examiners.
In 1999, in Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999), the United States Supreme Court applied the Daubert standard to all expert testimony, not just testimony from scientists. The Daubert factors therefore apply to forensic document examiners.
When asked, “What is the error rate of a methodology?” a document examiner must first know how error rate is defined. On what criteria is the question based? Controlled experiments, such as proficiency tests and academic studies, have reported results, but both settings have limitations.
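One minimal working definition, offered here only as a sketch (neither the courts nor the profession have settled on one), presupposes a countable set of conclusions scored against a known ground truth:

\[
\text{error rate} = \frac{\text{number of erroneous conclusions}}{\text{number of conclusions rendered on cases whose ground truth is known}}
\]

Both the numerator and the denominator are problematic, as the paragraphs that follow show: what counts as an erroneous conclusion, and how often is the ground truth ever known?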
Proficiency tests are not scientifically based because the subjects are self-selected rather than randomized. Nor are the test problems necessarily representative of the cases a document examiner receives in normal practice.
Academic experiments may be more properly controlled than proficiency tests, yet a question remains as to the selection process for the subjects. Are the subjects volunteers? What are the criteria for selecting subjects from the pool of volunteers?
In either setting, a question arises as to the methodology used to solve the test cases. Unless a scripted methodology is used, the results may not speak to the reliability of any particular methodology, because different subjects may apply individualized methodologies to the same problem.
In daily casework the document examiner rarely learns the outcome of a case. Even when the outcome is known, the examiner even less frequently learns whether the opinion was actually correct.
As to the definition of error: if a document examiner opines, “The writer of the known documents probably wrote the questioned document,” and the subject subsequently confesses, “I wrote the questioned document,” did the document examiner err? The conclusion was correct in substance, but the strongest supportable opinion would have been, “The writer of the known documents is identified as the writer of the questioned document.” Whether the qualified opinion counts as an error depends entirely on how error is defined.
Until the word “error” is properly defined, and until the true state of nature for the questioned document is known, an error rate cannot be properly stated.
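A hypothetical illustration (the numbers are invented purely for arithmetic and are not drawn from any study): suppose an examiner renders 100 opinions on cases whose ground truth is later established, of which 90 are definitive and correct, 8 are qualified (“probably wrote”) yet consistent with the truth, and 2 are wrong. Then

\[
\text{error rate} = \frac{2}{100} = 2\% \quad \text{if a qualified-but-consistent opinion is not counted as an error,}
\]

\[
\text{error rate} = \frac{2 + 8}{100} = 10\% \quad \text{if anything short of the correct definitive conclusion is counted as an error.}
\]

The same body of casework thus yields a fivefold difference in the reported rate, depending solely on the definition of error.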