Write the easiest-to-understand version: the one that obviously does what it's supposed to do. If you then write a tricky version of a function, because the easy one is too slow or uses too much memory at full scale, keep the version you know works in the code as well. Then you can switch between the two versions, or run them at the same time, to test whether the tricky one is really an improvement and whether it really does the same thing.

Delete the easiest version only if you change the intended result of the function away from what it produces, and even then it may be better to update the easy version instead. It serves as a kind of documentation for maintainers (including your future self) and as a double check on correctness, both by using it in testing and through the ideas that come up when you look at two different ways of doing the same thing.

For instance, I took a routine that was calculating the sine function repeatedly for a waveform and wrote an optimized version that calculates just one sample step, then produces the rest by complex multiplication. Running both at the same time and subtracting one result from the other produced a sound that represented the inaccuracy of the optimization. Without that test, I wouldn't have gotten the optimization right. Then, after correcting the optimization, I multiplied the difference by ten trillion and heard an interesting sound: the inaccuracy of double-precision floating-point complex multiplication.
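A minimal sketch of that comparison in Python, assuming the technique described (a rotating complex phasor replacing repeated sin() calls). The function names and parameters here are my own illustration, not the original code:

```python
import math

def sine_naive(freq, rate, n):
    """Reference version: call sin() once per sample. Obviously correct."""
    w = 2 * math.pi * freq / rate
    return [math.sin(w * i) for i in range(n)]

def sine_phasor(freq, rate, n):
    """Optimized version: compute one rotation step e^{iw},
    then generate each sample by one complex multiplication."""
    w = 2 * math.pi * freq / rate
    step = complex(math.cos(w), math.sin(w))
    z = complex(1.0, 0.0)
    out = []
    for _ in range(n):
        out.append(z.imag)  # imaginary part of z is sin(w * i)
        z *= step
    return out

# Run both versions and subtract: the difference is the error signal.
# Played as audio (or amplified hugely), it makes rounding error audible.
a = sine_naive(440.0, 48000, 48000)
b = sine_phasor(440.0, 48000, 48000)
err = [x - y for x, y in zip(a, b)]
print(max(abs(e) for e in err))  # tiny: accumulated double-precision error
```

The difference stays near machine epsilon over a second of samples; scaling it up by a large factor turns that residue into the audible signal described above.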
