
Big Theta Notation

When most people talk about big-o notation, they are actually referring to what Donald Knuth describes as big-theta notation. Technically, big-o notation refers only to an upper bound. For example, seven is an upper bound of six; so are nine, 12, and 65. In practice, when most people discuss function growth, they are really talking about the least upper bound, a function that models both the upper and lower bounds[1]. Professor Knuth, the father of the field of algorithmic analysis, describes this as big-theta notation and gives the following definition:

[1] If you are curious, the lower bound is modeled by big-omega notation. The definition is the same as big-o, except that g(x) is always less than or equal to f(x), not greater than or equal to it. Big-omega notation is less interesting than big-o because finding functions smaller than your function is rarely useful.

    If f(x) is big-theta of g(x), then g(x) is both an upper bound and a lower bound for f(x).
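The quoted definition can also be stated formally. The restatement below is not in the original text, but it is the standard one: f(x) is big-theta of g(x) exactly when, past some threshold, suitably scaled copies of g(x) bound f(x) from above and from below. In LaTeX notation:

\[
f(x) = \Theta(g(x)) \iff \exists\, c_1 > 0,\ c_2 > 0,\ x_0 \ \text{such that}\ c_1\,g(x) \le f(x) \le c_2\,g(x) \quad \text{for all } x \ge x_0
\]

For example, 3x^2 + 2x is \Theta(x^2): it is at least 3x^2 for all x \ge 0 and at most 4x^2 for all x \ge 2.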

Then, you can say that f(x) is of order g(x). The order, or big-theta, of an algorithm is one of the most important mathematical tools for understanding algorithms in the kernel.
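As a concrete illustration, here is a minimal sketch in C, not actual kernel code: a linear search over an array of n integers. The loop performs at most n comparisons, so n is an upper bound on the work; when the value is absent, it performs exactly n comparisons, so the bound is tight. The worst-case running time is therefore big-theta of n.

/*
 * Hypothetical example, not taken from the kernel: linear search over
 * an array of n integers.
 *
 * The loop body runs at most n times, so n is an upper bound on the
 * number of comparisons. When the value is absent, the loop body runs
 * exactly n times, so the bound is also a lower bound: the worst-case
 * running time is big-theta of n.
 */
#include <stddef.h>
#include <stdio.h>

static int linear_search(const int *array, size_t n, int value)
{
        size_t i;

        for (i = 0; i < n; i++) {
                if (array[i] == value)
                        return (int) i;   /* found: return the index */
        }

        return -1;                        /* not found: all n elements examined */
}

int main(void)
{
        int data[] = { 4, 8, 15, 16, 23, 42 };
        size_t n = sizeof(data) / sizeof(data[0]);

        printf("%d\n", linear_search(data, n, 23));   /* prints 4 */
        printf("%d\n", linear_search(data, n, 7));    /* prints -1 */
        return 0;
}

Doubling n doubles the worst-case work; that proportionality, rather than any constant factor, is what the order of the algorithm captures.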

Consequently, when people refer to big-o notation they are more often talking about the least such big-o, the big-theta. You really do not have to worry about this, unless you want to make Professor Knuth really happy.
