Monday, 10 December 2012

Index For Algorithm Performance

Asymptotic notation is used to classify and rate algorithms. I never studied algorithms as part of my formal education. So, when I started learning the subject outside the hallowed walls of the academic world, asymptotic notation was a hurdle to picking up algorithms quickly.

Yes, people, asymptotic notation was the main culprit behind why it took me so long, and why I found it so difficult, to cross the waters of sorting, searching and graphs and reach data compression and natural language processing. Perhaps computer science students have the same problem in college. Is there an easier and better way to rate algorithms?

Before that, a brief look at the notations. As Sedgewick and Wayne explain in their book Algorithms, Fourth Edition [AFE], “In the early days of computer science, D. E. Knuth postulated that, despite all of the complicating factors in understanding the running times of our programs, it is possible, in principle, to build a mathematical model to describe the running time of any program. Knuth’s basic insight is simple: the total running time of a program is determined by two primary factors:
The cost of executing each statement.
The frequency of execution of each statement.”
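Knuth’s two-factor model can be sketched numerically. The sketch below is my own illustration, not from [AFE]; the per-statement costs are invented constants, and only the frequencies come from reasoning about a simple counting loop.

```python
# Hedged sketch of Knuth's model: total running time is the sum, over all
# statements, of (cost of one execution) x (number of executions).
# The per-statement costs below are made-up illustrative constants.

def running_time_estimate(n):
    # Statement frequencies for a simple loop that sums 0..n-1:
    #   init:    executed once
    #   compare: executed n + 1 times (loop test, including the failing one)
    #   body:    executed n times (the addition inside the loop)
    frequencies = {"init": 1, "compare": n + 1, "body": n}
    costs = {"init": 2.0, "compare": 1.0, "body": 3.0}  # hypothetical units
    return sum(costs[s] * frequencies[s] for s in frequencies)

print(running_time_estimate(1000))  # estimate grows linearly in n
```

The point of the model is visible even in this toy: the frequencies depend on the input size n, the costs do not, so the input-dependent part dominates the total.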

Donald Knuth borrowed the Big-Oh, Big-Omega and Big-Theta notations from mathematics to define the so-called time complexity of algorithms. Big-Oh, for heaven’s sake, was a tool introduced by Paul Bachmann back in 1894 (gasp!!). It is a convenient tool for expressing upper bounds on the growth rate of functions. How is a function defined? From Schaum's Outlines Discrete Mathematics by Seymour Lipschutz and Marc Lars Lipson, Third Edition, Special Indian Edition 2010, adapted by Varsha Patil: “Suppose that to each element of a set A we assign a unique element of a set B; the collection of such assignments is called a function from A into B”. One more new term in the mix: set. How is a set defined? “A set may be viewed as an unordered collection of distinct objects, called elements or members of the set”. Okay, at this point the jargon ends, so the chain of definitions can stop.
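To make the upper-bound idea concrete, here is a small example of my own (not from either book quoted above): f(N) = 2N² + 3N is O(N²) because a constant multiple of N² eventually dominates it.

```python
# Numeric check of the Big-Oh idea: f(N) = 2N^2 + 3N is O(N^2)
# because, for c = 3 and all N >= 3, f(N) <= c * N^2.
# (The witnesses c and N0 are chosen by hand for illustration.)

def f(N):
    return 2 * N ** 2 + 3 * N

c, N0 = 3, 3
assert all(f(N) <= c * N ** 2 for N in range(N0, 10_000))
```

Note what the check does not say: the same f(N) is also O(N³), O(N⁴) and so on, which is exactly why an upper bound alone tells you little about actual performance.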


I do think Knuth, along with the computer science fraternity, screwed up my happiness big time. Perhaps there was a time when algorithms were used and implemented with paper and pen. But for our generation, an algorithm means its programming implementation. Period. So why take recourse to functions? Even if one generation did that, why do we have to persist with the approach?

I was glad when I realized that [AFE] gets away from these Big-Oh/Omega/Theta notations. Yes it does, and that’s cool. In the Q&A section of Chapter 1, the authors Sedgewick and Wayne explain: “The big-Oh notation is widely used. This notation is very useful in providing asymptotic upper bounds on the performance of algorithms, which is important in the theory of algorithms. But it is not useful for predicting performance or for comparing algorithms. The primary reason is that it describes only an upper bound on the running time. Actual performance may be much better. The running time of an algorithm might be both O(N²) and ~ a N log N. As a result, it cannot be used to justify tests like our doubling ratio test.”
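To make the doubling ratio test concrete, here is a sketch in Python. It is a simplification of what [AFE] actually describes: to keep the result deterministic I count a basic operation instead of measuring wall-clock time, but the idea is the same, double N and watch the ratio of successive measurements.

```python
# Deterministic sketch of the doubling ratio test: count a basic operation
# (inner-loop executions of a textbook quadratic double loop), double N,
# and look at the ratio of successive counts. For an ~ a*N^2 algorithm the
# ratio approaches 2^2 = 4.

def op_count(N):
    """Count inner-loop executions of a quadratic double loop."""
    count = 0
    for i in range(N):
        for j in range(i + 1, N):
            count += 1            # one "basic operation"
    return count

prev = op_count(250)
for N in (500, 1000, 2000):
    cur = op_count(N)
    print(N, round(cur / prev, 2))  # ratios converge toward 4
    prev = cur
```

A ratio near 4 suggests quadratic growth, near 2 suggests linear, near 8 cubic; that single measured number already tells you more about real behaviour than the upper bound O(N²) does.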

So the big-Oh notation was not only a hurdle to my learning; it is also not useful, as Sedgewick and Wayne state. Let's see what these authors have done instead. Their approach is simpler. Basically, it is to rate algorithms using a “cost model that defines the basic operations used by the algorithms”. Their “intent is to articulate cost models such that the order of growth of the running time for a given implementation is the same as the order of growth of the cost of the underlying algorithm.” Sedgewick and Wayne use the tilde notation (~), which “allows us to work with tilde approximations, where we throw away low-order terms that complicate formulas and represent a negligible contribution to values of interest.”

Before I read [AFE] I used to think, and I continue to think now, that there ought to be a totally different method that drastically simplifies the teaching of algorithm performance. One idea is to rate an algorithm with a single number. Let’s call it the MH-index. So we would just say, for example, that the MH-index for radix sort is 2.4, or that the MH-index for depth-first search is 3.2. Plain and simple. No deleting higher-order terms, no order-of-growth curves.

I don’t have a clue about how we would arrive at such a numeric index for rating algorithms, but we can look at a couple of methodologies for inspiration. First, processor benchmarking: from the published SPEC results, we know that the ACTINA SOLAR 220 X3 (Intel Xeon X5650) has a SPECint2006 rating of 37.5. That’s it - a number for a processor. Analogously, we can imagine a universal standard processor and a universal standard input size, and run algorithms on that processor with that input size. Second, for grouping and display, I like the periodic table of chemical elements, which I learnt in school as a kid.
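Purely as a thought experiment, here is one way such an index might be computed: run an algorithm on one fixed reference input and turn the elapsed time into a single number. Everything here (the function name mh_index, the reference size, the log-scaling) is invented for illustration; the resulting number is machine-dependent, which is exactly the problem a "universal standard processor" would have to solve.

```python
# Hypothetical sketch of an "MH-index": benchmark a sorting function on
# one fixed, seeded reference input and reduce the timing to one number.
# All names and constants here are invented for illustration.
import math
import random
import time

def mh_index(sort_fn, reference_size=10_000, seed=42):
    random.seed(seed)                       # fixed "universal" input
    data = [random.random() for _ in range(reference_size)]
    start = time.perf_counter()
    sort_fn(data[:])                        # sort a copy, keep input intact
    elapsed = time.perf_counter() - start
    return round(-math.log10(elapsed), 2)   # bigger index = faster run

print(mh_index(sorted))  # one number per algorithm, machine-dependent
```

The log-scaling is just one arbitrary way to compress raw seconds into a small, comparable figure, in the spirit of a SPEC rating.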

Come on, computer science folks - if humanity can put a number on beauty in pageants and rate different beauties with numbers, surely we can do the same for algorithms. Check it out yourself: in the Femina Miss India contest of 1994, both Aishwarya Rai and Sushmita Sen got a score of 9.33, leading to a tie for the first time in the 29-year history of the contest. As per the contest rules, the judges re-evaluated all five finalists, and in this round Sushmita triumphed with a score of 9.41 over Aishwarya with 9.39.

Growth curves are okay, as long as they are confined to models. For algorithms, let’s just have an index. The benefit of such an index would be the removal of a big learning barrier for algorithms. It might encourage more people to take up computer science courses, equip them more quickly with algorithmic thinking and pave the way for plenty of innovation.

I am done with giving my concept and request - now some smart guys need to figure out how to make it a reality.
