To answer this question, we assume the unit cost RAM model, where the computational time is essentially the number of elementary arithmetic operations and storing any integer number takes unit space. See Section 2.14.
There are two approaches to evaluating the complexity of a given convex hull algorithm.
Let $\mathcal{A}$ be an algorithm which computes a minimal inequality description $P = \{x \in \mathbb{R}^d : A\,x \le b\}$ of a full-dimensional convex polytope $P = \operatorname{conv}(S)$ for a given point set $S$ in $\mathbb{R}^d$ with $n = |S|$. Let $m$ denote the number of inequalities in the output $A\,x \le b$.
(One can interpret the discussion here in the dual setting: consider $\mathcal{A}$ as an algorithm to compute all $m$ vertices of a convex polytope given by $n$ inequalities.)
First of all, most people agree that the efficiency of computing the convex hull should be measured at least by the critical input parameters $n$ and $d$. Some people like to see the complexity with $d$ fixed to a constant, but it is always better to evaluate it in terms of $d$ as well, and fix $d$ later if necessary.
The first measure, often employed by computational geometers, is to bound the worst-case running time of an algorithm $\mathcal{A}$ for any input of $n$ points in $\mathbb{R}^d$. For example, if $\mathcal{A}$ is of $O(d\,n^{\lfloor d/2 \rfloor})$, then it means $\mathcal{A}$ terminates in time $O(d\,n^{\lfloor d/2 \rfloor})$ for ANY input of $n$ points in dimension $d$. Also, when one sets $d$ to be fixed (constant), such an algorithm is said to have time complexity $O(n^{\lfloor d/2 \rfloor})$, since $d$ is simply a constant. We may call this the worst-case-input measure. For fixed dimension, there is an optimal algorithm [Cha93] for the convex hull in terms of the worst-case-input measure: it runs in time $O(n^{\lfloor d/2 \rfloor})$ for $d \ge 4$. It cannot be better, because the largest possible output is of the same order by the upper bound theorem (Theorem 2).
The worst-case-input measure is quite popular, but it might be a little misleading. For example, suppose that algorithms $\mathcal{A}$ and $\mathcal{B}$ are of time complexity $O(n^{\lfloor d/2 \rfloor})$ and $O(n^{\lfloor d/2 \rfloor} \log n)$, respectively. Then by this measure, algorithm $\mathcal{A}$ is superior to $\mathcal{B}$.
Here is a potentially serious problem with this worst-case-input measure. Above, it is still possible that $\mathcal{A}$ takes time of order $n^{\lfloor d/2 \rfloor}$ for ALL inputs of $n$ points in $\mathbb{R}^d$, while $\mathcal{B}$ takes time proportional to some polynomial function of $n$, $d$ and $m$. Note that the number $m$ of inequalities varies wildly, from $d+1$ up to $\Theta(n^{\lfloor d/2 \rfloor})$, even for fixed $d$ (by the upper bound theorem, Theorem 2, and (1)). This diversity is just too big to be ignored if $d \ge 4$. Furthermore, input data leading to the worst-case output size hardly ever occurs in practice. In fact, for a random spherical polytope, the expected value of $m$ is linear in $n$; see Section 2.16. While the worst-case-input optimal algorithm [Cha93] is a remarkable theoretical achievement, we are still very far from knowing the best ways to compute the convex hull in general dimensions.
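In the plane this diversity is already visible: the output size can be anything from $3$ to $n$. The sketch below (plain Python; it uses Andrew's monotone chain algorithm, which is not one of the algorithms discussed in this section) computes the hulls of two $100$-point sets, one in convex position and one whose hull is a triangle:

```python
# Andrew's monotone chain: a worst-case O(n log n) planar convex hull.
# Used here only to show that the output size ranges from 3 to n.
import math

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints appear in both chains

n = 100
# n points on a circle: every point is extreme, so the hull has n vertices.
circle = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
# 3 triangle corners plus n-3 interior points: the hull has 3 vertices.
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)] + \
           [(1.0 + i / n, 1.0) for i in range(n - 3)]

print(len(convex_hull(circle)))    # 100
print(len(convex_hull(triangle)))  # 3
```

Both inputs have the same size, yet the output sizes differ by a factor of $n/3$; in dimension $d \ge 4$ the gap between the extremes is superlinear in $n$.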
In order to circumvent this pitfall, one can use a measure involving all the key variables $n$, $d$ and $m$. Or, more generally, one can measure the time complexity in terms of both the size of the input and the size of the output. We say that an algorithm is polynomial if it runs in time bounded by a polynomial in $n$, $d$ and $m$. This polynomiality coincides with the usual polynomiality when the output size is polynomially bounded by the input size.
Under the nondegeneracy assumption (see Section 2.12), there is a polynomial algorithm for the convex hull problem. Among the earlier polynomial algorithms are pivot-based algorithms [CCH53,Dye83] solving the problem in dual form (the vertex enumeration problem) and a wrapping algorithm [CK70]. A more recent algorithm [AF92], based on the reverse search technique [AF96], is not only polynomial but also compact. Here, we say that an algorithm is compact if its space complexity is polynomial in the input size only.
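As a small illustration of such output-sensitive bounds, the planar special case of the wrapping idea is the classical gift-wrapping (Jarvis march) algorithm, whose running time is proportional to $nm$, where $m$ is the number of hull vertices: a bound polynomial in both the input size and the output size. A minimal sketch (this is only the $d = 2$ case, not the general-dimension algorithm of [CK70]):

```python
# Gift wrapping (Jarvis march): planar convex hull in O(n*m) time,
# where m is the number of hull vertices -- an output-sensitive bound.

def jarvis_march(points):
    """Return hull vertices counter-clockwise, starting from the
    lexicographically smallest point.  Assumes not all points are
    collinear."""
    pts = list(set(points))
    if len(pts) < 3:
        return pts
    start = min(pts)
    hull, current = [start], start
    while True:
        # "Wrap": find the point such that every other point lies to
        # the left of the directed line current -> candidate.
        candidate = pts[0] if pts[0] != current else pts[1]
        for p in pts:
            if p == current:
                continue
            turn = ((candidate[0] - current[0]) * (p[1] - current[1])
                    - (candidate[1] - current[1]) * (p[0] - current[0]))
            if turn < 0:       # p lies to the right: rotate to p
                candidate = p
        if candidate == start:  # wrapped all the way around
            break
        hull.append(candidate)
        current = candidate
    return hull

square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(jarvis_march(square))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Each of the $m$ wrapping steps scans all $n$ points, so the total work is $O(nm)$: cheap when the hull is small, regardless of how large $n$ is.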
In the general case, no polynomial algorithm is known. The paper [ABS97] is an excellent article presenting how various algorithms fail to be polynomial, through ingenious constructions of ``nasty'' polytopes.