For me, not addressing potential duplicate-content issues is a common error, one that leads to lots of duplicate URLs that then have to be cleaned up.
It goes beyond using the canonical tag, too: it's about understanding things like session IDs and URL parameters, and knowing which ones to block from being indexed versus which ones produce genuinely unique URLs that should be indexed to support SEO.
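To make the parameter distinction concrete, here's a minimal sketch of URL canonicalization in Python. The parameter lists are hypothetical, just for illustration; on a real site you'd audit which parameters actually change the page content versus which only track sessions or campaigns.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical parameter sets -- the real lists depend on the site's URL scheme.
VALID_PARAMS = {"page", "category"}  # parameters that define genuinely unique pages

def canonicalize(url):
    """Strip session/tracking parameters; keep only those that define unique content."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() in VALID_PARAMS]
    kept.sort()  # stable ordering so parameter order alone can't create duplicate URLs
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonicalize("https://example.com/shop?sessionid=abc123&category=shoes&utm_source=mail"))
# -> https://example.com/shop?category=shoes
```

The session ID and tracking parameter are dropped, while the content-defining parameter survives, so many incoming URL variants collapse to one canonical address.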
I guess it relates back to the core IA point.
What's your view on handling duplication (small subject!)?