Technically, when you compress something to the point of total randomness, you're not "purifying" the information by removing redundancy. You're changing how the information is represented: effectively, part of it moves from the stored data into the decompression algorithm, because only by knowing that algorithm can you restore the original knowledge. A different algorithm will produce different knowledge from the same "purified" data as input.
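
A toy sketch of this idea (not real compression; the codebooks and decoders here are hypothetical): the same maximally "random"-looking bytes decode to entirely different messages depending on which decompression algorithm you apply, so the meaning lives partly in the algorithm, not in the bytes alone.

```python
# Toy illustration: the same "purified" bytes decode to different
# messages depending on the decompression algorithm (codebook).
payload = bytes([2, 0, 1])  # looks like featureless data on its own

# Hypothetical algorithm A: interpret each byte as an index into codebook A
codebook_a = ["sun", "rises", "east"]
def decode_a(data: bytes) -> str:
    return " ".join(codebook_a[b] for b in data)

# Hypothetical algorithm B: same bytes, different codebook
codebook_b = ["water", "boils", "hot"]
def decode_b(data: bytes) -> str:
    return " ".join(codebook_b[b] for b in data)

print(decode_a(payload))  # -> "east sun rises"
print(decode_b(payload))  # -> "hot water boils"
```

Same input bytes, two different reconstructed "facts": without agreeing on the decoder, the payload carries no particular meaning.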