Random commentary about Machine Learning, BigData, Spark, Deep Learning, C++, STL, Boost, Perl, Python, Algorithms, Problem Solving and Web Search
Monday, June 28, 2010
Quicksort is O(n log n) on average. What about the worst case?
Surprisingly enough, there are still people who are not aware that Quicksort is O(n log n) on average. How can we make it O(n log n) in the worst case?
Use IntroSort (http://en.wikipedia.org/wiki/Introsort)
Introsort - http://en.wikipedia.org/wiki/Introsort
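A minimal sketch of the introsort idea in Python (illustrative only, not the `std::sort` implementation): run quicksort until a depth budget of about 2·log2(n) is spent, then heapsort whatever slice remains. The heapsort fallback caps the worst case at O(n log n) no matter how the pivots turn out.

```python
import heapq


def introsort(a):
    """Sort list a in place: quicksort with a depth budget, falling
    back to heapsort once the budget is exhausted."""
    max_depth = 2 * max(len(a), 1).bit_length()
    _introsort(a, 0, len(a), max_depth)


def _introsort(a, lo, hi, depth):
    if hi - lo <= 1:
        return
    if depth == 0:
        # Depth budget exhausted: heapsort this slice in O(k log k).
        chunk = a[lo:hi]
        heapq.heapify(chunk)
        a[lo:hi] = [heapq.heappop(chunk) for _ in range(len(chunk))]
        return
    p = _partition(a, lo, hi)
    _introsort(a, lo, p, depth - 1)
    _introsort(a, p + 1, hi, depth - 1)


def _partition(a, lo, hi):
    # Lomuto partition using the last element as the pivot.
    pivot = a[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    return i
```

Even an adversarial input (e.g. already sorted data with a bad pivot rule) only costs the quicksort levels down to the depth limit plus one heapsort pass.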
and without going in that direction?
3-way partitioning?
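For reference, 3-way partitioning does not fix the worst case for distinct keys, but it does help a lot when the input has many duplicates: elements equal to the pivot are grouped once and never recursed on. A sketch (variable names are my own):

```python
import random


def quicksort3(a, lo=0, hi=None):
    """In-place quicksort with 3-way (Dutch national flag) partitioning.
    Invariant: a[lo:lt] < pivot, a[lt:i] == pivot, a[gt:hi] > pivot."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    pivot = a[random.randrange(lo, hi)]
    lt, i, gt = lo, lo, hi
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            gt -= 1
            a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    # Recurse only on the strictly-less and strictly-greater parts.
    quicksort3(a, lo, lt)
    quicksort3(a, gt, hi)
```

On an array with only k distinct values this does O(n) work per distinct value, so heavily duplicated inputs sort in O(n·k) rather than degrading toward O(n²).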
Does parallelizing the operations count as making it better?
no. selection counts
OK. Randomized quicksort gives you an expected running time of O(n log n). It is expected rather than worst case because there is always a chance of picking a bad pivot, no matter how random the selection is.
The idea is: if I can guarantee that my pivot selection is never a bad one, I can guarantee an O(n log n) worst-case running time for quicksort. How can I guarantee that? By using something like the "Median of Medians algorithm" to pick the pivot.
Would the above be a valid answer?
i like the second one.