As a computer programmer, I like how things work logically. The machine does exactly what you tell it to do.
As a programmer who specializes in user interfaces, I have found that the best practice is to assume the user will, whenever they are able, give you inputs that break your code, and to plan accordingly.
This is fundamentally different from handling machine inputs, where you can allow only "good" input and be pretty much okay. That is the whitelist approach, as it is called.
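A minimal sketch of the whitelist idea for machine-to-machine input, assuming the input is a status value drawn from a small known set (the names here are hypothetical, not from any particular system):

```python
# Whitelist validation: accept only values from a known-good set,
# reject everything else. This works well for machine inputs because
# the full set of valid values is knowable in advance.
ALLOWED_STATUSES = {"pending", "active", "closed"}

def validate_status(raw: str) -> str:
    """Return the value if it is on the whitelist; otherwise raise."""
    if raw not in ALLOWED_STATUSES:
        raise ValueError(f"unexpected status: {raw!r}")
    return raw
```

The key property is that anything not explicitly allowed is rejected, rather than trying to enumerate all the ways input could be bad.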
With UI work, especially when the user gets to define "good", you have to worry not only about nefarious inputs but also about plain random "stuff". This often means kissing your regex goodbye. It won't work.
So combining user inputs with system-generated things, such as labels or IDs, presents you, the developer, with interesting logical challenges that I never had to consider when dealing with machine-to-machine communication.
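One way to handle this kind of mixing is to never trust the user-supplied part of a generated identifier: reduce it to a conservative character set and let the system-generated suffix guarantee uniqueness. A sketch under that assumption (the function and counter here are illustrative, not from the original post):

```python
import itertools
import re

# System-generated suffix to guarantee uniqueness regardless of
# what the user typed.
_counter = itertools.count(1)

def make_element_id(user_label: str) -> str:
    """Build a safe ID from an arbitrary user-supplied label."""
    # Collapse every run of disallowed characters into a single hyphen,
    # then trim stray hyphens. Emoji, HTML, quotes, etc. all disappear.
    safe = re.sub(r"[^A-Za-z0-9-]+", "-", user_label).strip("-").lower()
    # If nothing survives (e.g. the label was all emoji), fall back
    # to a placeholder instead of producing an empty ID.
    safe = safe or "item"
    return f"{safe}-{next(_counter)}"
```

The design choice is that the sanitized label is cosmetic only; correctness rests entirely on the system-generated counter, so no user input can collide with or break another ID.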
My rule of thumb concerns cascading errors. If a user can input something that breaks your code, then even with fairly resilient error handling that catches it, allowing them to continue with their work lets the error cascade through your system and cause havoc.
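A small sketch of blocking the cascade at the boundary, under the assumption that downstream steps trust their inputs (the order-processing names are hypothetical): instead of catching the bad value and letting the workflow continue with a default, validation failure stops the user right there.

```python
def parse_quantity(quantity_raw: str) -> int:
    """Validate at the boundary; refuse to let bad input proceed."""
    try:
        quantity = int(quantity_raw)
    except ValueError:
        # Reject here. Substituting a default and carrying on would
        # let the bad value resurface later, far from its cause.
        raise ValueError(f"quantity must be a whole number, got {quantity_raw!r}")
    if quantity <= 0:
        raise ValueError(f"quantity must be positive, got {quantity}")
    return quantity
```

The point is not the error handling itself but where it happens: once a value passes this gate, the rest of the system can rely on it, and nothing cascades.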