Does anybody here understand how Huffman's algorithm can be implemented in linearithmic ( O(n*log(n)) ) time? The way I implemented it in the AEC programming language (you can see it live
here and you can see the code
here), it takes O(n^2) time.
Huffman's algorithm first requires counting the frequencies of the characters and storing them in an array of TreeNodes. I did that in O(n^2) time, but I suppose it can be done in O(n*log(n)) by using a set and then converting that set to an array.

The real problem is the main loop. In the worst case (all the characters are distinct, so the string cannot be compressed), we have to search the array of TreeNodes for the node with the lowest frequency and the node with the second-lowest frequency, and repeat that up to n times. Each linear search takes O(n), so that alone already looks like O(n^2). On top of that, we have to erase those two minimal nodes from the array once we find them, and deleting an element from an array takes O(n) time; doing that n times is again O(n^2). So, how can Huffman's algorithm possibly be implemented in O(n*log(n)) time?
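To make it concrete, here is a minimal Python sketch (not my AEC code, just an illustration of the procedure I'm describing) of the quadratic version: linear scans for the two smallest nodes, plus O(n) deletions from the array:

```python
# Illustrative sketch of the quadratic Huffman construction described above.
# The frequency counting here uses a dict (O(n)); the quadratic cost comes
# from the repeated linear scans and array deletions in the main loop.

class TreeNode:
    def __init__(self, char, freq, left=None, right=None):
        self.char = char
        self.freq = freq
        self.left = left
        self.right = right

def huffman_quadratic(text):
    freqs = {}
    for ch in text:
        freqs[ch] = freqs.get(ch, 0) + 1
    nodes = [TreeNode(ch, f) for ch, f in freqs.items()]
    # Repeatedly find and remove the two lowest-frequency nodes:
    # O(k) scan + O(k) deletion, repeated k-1 times -> O(k^2) total
    # for k distinct characters.
    while len(nodes) > 1:
        i = min(range(len(nodes)), key=lambda j: nodes[j].freq)
        a = nodes.pop(i)  # pop from a Python list shifts elements: O(k)
        i = min(range(len(nodes)), key=lambda j: nodes[j].freq)
        b = nodes.pop(i)
        nodes.append(TreeNode(None, a.freq + b.freq, a, b))
    return nodes[0] if nodes else None

def code_lengths(node, depth=0, table=None):
    # Walk the finished tree and record each character's code length.
    if table is None:
        table = {}
    if node is None:
        return table
    if node.char is not None:
        table[node.char] = max(depth, 1)  # single-symbol input still needs 1 bit
        return table
    code_lengths(node.left, depth + 1, table)
    code_lengths(node.right, depth + 1, table)
    return table
```

For example, for the string "aaabbc" this produces code lengths of 1 bit for 'a' and 2 bits each for 'b' and 'c'. It is the `min(...)` scan and the `pop(i)` deletion inside the loop that I don't see how to get below linear time.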