Absolute Bounds on Set Intersection and Union Sizes from Distribution Information

A catalog of quick closed-form bounds on set intersection and union sizes is presented; these bounds can be expressed as rules and managed by a rule-based system architecture. The methods use a variety of statistics precomputed on the data, and exploit homomorphisms (onto mappings) of the data items onto distributions that can be more easily analyzed. The methods can be used anytime, but tend to work best when there are strong or complex correlations in the data, a circumstance poorly handled by the standard independence-assumption and distributional-assumption estimates.


Previous work
Analysis of the sizes of intersections is one of several critical issues in optimizing database query performance; it is also important in optimizing execution of logic-programming languages like Prolog. The emphasis in previous research on this subject has been almost entirely on developing estimates, not bounds. Various independence and uniformity assumptions have been suggested (e.g., [4] and [11]). These methods work well for data that has no or minor correlations between attributes and between sets intersected, and where bounds are not needed.
Christodoulakis [2] (work extending [9]) has estimated sizes of intersections and unions where correlations are well modeled probabilistically. He uses a multivariate probability distribution to represent the space of possible combinations of the attributes, each dimension corresponding to a set being intersected and the attribute defining it. The size of the intersection is then the number of points in a hyperrectangular region of the distribution. This approach works well for data that has a few simple but possibly strong correlations between attributes or between sets intersected, and where bounds are not needed. Its main disadvantages are (1) it requires extensive study of the data beforehand to estimate parameters of the multivariable distributions (and the distributions can change with time and later become invalid), (2) it only exploits count statistics (what we call level 1 and level 5 information in section 4), and (3) it only works for databases without too many correlations between entities.
Similar work is that of [7]. They model the data by coefficients equivalent to moments. They do not use multivariate distributions explicitly, but use the independence assumption whenever they can. Otherwise they partition the database along various attribute ranges (into what they call "betas", what [5] calls "1-sets", and what [12] calls "first-order sets") and model the univariate distributions on every attribute. This approach does allow modeling of arbitrary correlations in the data, both positive and negative, but requires potentially enormous space in its reduction of everything to univariate distributions. It can also be very wasteful of space, since it is hard to give different correlation phenomena different granularities of description. Again, the method exploits only count statistics and only gives estimates, not bounds.
Some relevant work involving bounds on set sizes is that of [8], which springs from a quite different motivation than ours (handling of incomplete information in a database system), and again only uses count statistics. [10] investigates bounds on the sizes of partitions of a single numeric attribute using prior distribution information, but does not consider the much more important case of multiple attributes.
There has also been relevant work over the years on probabilistic inequalities [1]. We can divide counts by the size of the database to turn them into probabilities on a finite universe, and apply some of these mathematical results. However, the first and second objections of section 1 apply to this work: it usually makes detailed distributional assumptions, and is mathematically complex. For practical database situations we need something more general-purpose and simpler.

The general method
We present two main approaches to calculation of absolute bounds on intersection and union sizes in this paper.
Suppose we have a census database on which we have tabulated statistics of state, age, and income. Suppose we wish an upper bound on the number of residents of Iowa that are between the ages of 30 and 34 inclusive, when all we know are statistics on Iowa residents and statistics on people age 30-34 separately. One upper bound would be the frequency of the mode (most common) state for people age 30-34. Another would be five times the frequency of the most common age for people living in Iowa (since there are five ages in the range 30-34). These are examples of frequency-distribution bounds (discussed in section 4), to which we devote primary attention in this paper.
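The two census bounds can be sketched in a few lines of Python; all counts below are hypothetical illustrations, not real census statistics.

```python
# Two frequency-distribution upper bounds on |Iowa residents ∩ ages 30-34|.
# The counts are invented for illustration.

mode_state_freq_age_30_34 = 41000  # frequency of the most common state among people age 30-34
mode_age_freq_iowa = 45000         # frequency of the most common single age among Iowa residents
ages_in_range = 5                  # ages 30, 31, 32, 33, 34

# Bound 1: Iowa can do no better than the most common state for this age group.
bound1 = mode_state_freq_age_30_34

# Bound 2: each of the five ages contributes at most the mode age frequency.
bound2 = ages_in_range * mode_age_freq_iowa

upper_bound = min(bound1, bound2)
print(upper_bound)  # 41000
```

Either bound alone is valid; taking the minimum of all available bounds is always safe.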
Suppose we also have income information in our database, and suppose the question is to find the number of Iowans who earned over 100,000 dollars last year. Even though the question has nothing to do with ages, we may be able to use age data to answer this question. We obtain the maximum and minimum statistics on the age attribute of the set of Americans who earned over 100,000 dollars (combining several subranges of earnings to get this if necessary), and then find out the number of Americans that lie in that age range, and that is an upper bound. We can also use the methods of the preceding paragraph to find the number of Iowans lying in that age range. This is an example of range-restriction bounds (discussed in section 6).
Our basic method for both kinds of bounds is quite simple. Before querying any set sizes, preprocess the data:
(1) Group the data items into categories. The categories may be arbitrary.
(2) Count the number of items in each category, and store statistics characterizing (in some way) these counts.
Now when bounds on a set intersection or union are needed:
(3) Look up the statistics relevant to all the sets mentioned in the query, to bound certain subset counts.
(4) Find the minima (for intersections) or maxima (for unions) of the corresponding counts for each subset.
(5) Sum up the minima (or maxima) to get an overall bound on the intersection or union size.
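The five steps can be sketched as code, assuming the richest ("level 5"-style) statistics, namely per-category counts for each set; the example dictionaries are hypothetical.

```python
# Steps 3-5 of the general method, given per-category counts for each set.

def intersection_sup(counts_per_set):
    """counts_per_set: one dict per set mapping category -> item count.
    Per category, the intersection holds at most the minimum count over
    the sets; summing these minima gives an overall upper bound."""
    shared = set.intersection(*(set(c) for c in counts_per_set))
    return sum(min(c[k] for c in counts_per_set) for k in shared)

def union_inf(counts_per_set):
    """Per category, the union holds at least the maximum count over the
    sets; summing these maxima gives an overall lower bound."""
    all_cats = set().union(*counts_per_set)
    return sum(max(c.get(k, 0) for c in counts_per_set) for k in all_cats)

a = {"x": 3, "y": 2}
b = {"x": 1, "z": 4}
print(intersection_sup([a, b]))  # 1: only "x" is shared, min(3, 1)
print(union_inf([a, b]))         # 9: 3 + 2 + 4
```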
All our rules for bounds on sizes of set intersections will be expressed as a hierarchy of different "levels" of statistics knowledge about the data. Lower levels mean less prior knowledge, but generally poorer bounding performance.
The word "value" may be interpreted as any equivalence class of data attribute values. This means that prior counts on different equivalence classes may be used to get different bounds on the same intersection size, and the best one taken, though we do not include this explicitly in our formulae.

Frequency-distribution bounds
We now examine bounds derived from knowledge (partial or complete) of frequency distributions of attributes.

Level 1: set sizes of intersected sets only
If we know the sizes of the sets being intersected, an upper bound ("sup") on the size of the intersection is obviously min_{1≤i≤s} n(i), where n(i) is the size of the ith set and s is the number of sets.

Level 2a: mode frequencies and numbers of distinct values
Suppose we know the mode (most common) frequency m(i,j) and number of distinct values d(i,j) for some attribute j for each set i of s total. Then an upper bound on the size of the intersection is [min_{1≤i≤s} m(i,j)] · [min_{1≤i≤s} d(i,j)]. To prove this: (1) an upper bound on the mode frequency of the intersection is the minimum of the mode frequencies; (2) an upper bound on the number of distinct values of the intersection is the minimum of the number for each set; (3) an upper bound on the size of a set is the product of its mode frequency and number of distinct values; and (4) an upper bound on the product of two nonnegative uncertain quantities is the product of their upper bounds.
If we know such statistics for more than one attribute of the data, we can take the minimum of the upper-bound computations on each attribute. Letting r be the number of attributes we know these statistics about, the revised bound is min_{1≤j≤r} ( [min_{1≤i≤s} m(i,j)] · [min_{1≤i≤s} d(i,j)] ). A special case occurs when one set being intersected has only one possible value on a given attribute--that is, the number of distinct values is 1. This condition can arise when a set is defined as a partition of the values on that attribute, but can also occur accidentally, particularly when the set concerned is small. Then the bound is just the first of the inner minima, the minimum of the mode frequencies on that attribute. For example, an upper bound on the number of American tankers is the mode frequency of tankers with respect to the nationality attribute.
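The multi-attribute level 2a bound is a minimum over attributes of the single-attribute products; a minimal sketch, with invented statistics:

```python
# Level 2a upper bound: min over attributes j of (min_i m(i,j)) * (min_i d(i,j)).

def level2a_sup(stats):
    """stats[i][j] = (mode_frequency, distinct_values) for set i, attribute j."""
    r = len(stats[0])  # number of attributes with known statistics
    return min(
        min(s[j][0] for s in stats) * min(s[j][1] for s in stats)
        for j in range(r)
    )

# Two sets, two attributes (hypothetical numbers):
stats = [
    [(5, 10), (8, 3)],  # set 1: (m, d) on attribute 0 and attribute 1
    [(7, 4), (6, 9)],   # set 2
]
print(level2a_sup(stats))  # min(5*4, 6*3) = min(20, 18) = 18
```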
The second special case is the other extreme, when one set being intersected has all different values for some attribute--that is, a mode frequency of 1. This arises from what we call an "extensional key" ([12], ch. 3) situation, where some attribute functions like a key to a relation, but only in a particular database state. Hence the bound is just the minimum of the numbers of distinct values on that attribute. For example, an upper bound on the number of American tankers in Naples, when we happen to know Naples permits only one ship per nationality at a time, is the number of different nationalities for tankers at Naples.

Level 2b: a different bound with the same information
A different line of reasoning leads to a different bound utilizing the mode frequency and number of distinct values, an "additive" bound instead of the "multiplicative" one above. Consider the mode on some attribute as partitioning a set into two pieces: those items having the mode value of the attribute, and those not. Then a bound on the size of the intersection of s sets is [min_{1≤i≤s} m(i,j)] + [min_{1≤i≤s} (n(i) − m(i,j))].
To prove this, let R(i) be everything in set i except the items having its mode value, so the size of R(i) is n(i) − m(i,j), and consider three cases. Case 1: assume the set i that satisfies the first inner min above also satisfies the second inner min. Then the expression is just the size of this set. But if such a set has minimum mode frequency and minimum-size R(i), it must be the smallest set, so its size must be an upper bound on the size of the intersection. Case 2: assume set i satisfies the first inner min, some other set j satisfies the second inner min, and sets i and j have the same mode (most common value). We need only consider these two sets, because an upper bound on their intersection size is an upper bound on the intersection of any group of sets containing them. Then the minimum of the two mode frequencies is an upper bound on the mode frequency of the intersection, and the minimum of the sizes of R(i) and R(j) is an upper bound on the size of the non-mode remainder of the intersection; the sum of the two minima is thus an upper bound on the intersection size. Case 3: assume set i satisfies the first inner min, set j satisfies the second inner min, and i and j have different modes. Let the mode frequency of i be a and that of j be d; suppose the mode of i has frequency e in set j, and suppose the rest of j (besides the d+e) has total size f. Furthermore suppose that the mode of j has frequency b in set i, and the rest of i (besides the a+b) has total count c. Then the 2b bound above is a+e+f. But in the actual intersection of the two sets, a would match with e, b with d, and c with f, giving an upper bound of min(a,e)+min(b,d)+min(c,f). But min(a,e) ≤ e, and min(b,d) ≤ b ≤ a because a is the mode frequency of set i; and lastly min(c,f) ≤ f. Hence min(a,e)+min(b,d)+min(c,f) ≤ a+e+f, and our 2b bound is an upper bound on the actual intersection size.
But the above bound doesn't use the information about the number of distinct values. If the set i that minimizes the last minimum in the formula above contains more distinct values than the minimum of d(i,j) over all the sets, we must "subtract out" the excess, assuming conservatively that the extra values occur only once in set i: [min_{1≤i≤s} m(i,j)] + [min_{1≤i≤s} ( n(i) − m(i,j) − max(0, d(i,j) − min_{1≤k≤s} d(k,j)) )]. It would seem that we could do better by subtracting out the minimum mode frequency of the sets a number of times corresponding to the minimum of the number of distinct values over all the sets. However, this reduces to the level 2a bound.
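The additive level 2b bound with the distinct-value correction can be sketched as follows; the statistics triples are hypothetical.

```python
# Level 2b upper bound with the "subtract out the excess" correction:
# min_i m(i,j) + min_i [ n(i) - m(i,j) - max(0, d(i,j) - min_k d(k,j)) ].

def level2b_sup(sets):
    """sets: list of (n, m, d) = (size, mode frequency, distinct values)
    triples for one attribute."""
    d_min = min(d for _, _, d in sets)
    mode_term = min(m for _, m, _ in sets)
    # Non-mode remainder, discounting excess distinct values beyond the
    # smallest distinct-value count (assumed to occur once each).
    rest_term = min(n - m - max(0, d - d_min) for n, m, d in sets)
    return mode_term + rest_term

sets = [(100, 40, 10), (50, 30, 5)]
print(level2b_sup(sets))  # min(40,30) + min(100-40-5, 50-30-0) = 30 + 20 = 50
```

Here the additive bound (50) beats the multiplicative level 2a bound min(40,30)·min(10,5) = 150 on the same statistics.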

Level 2c: Diophantine inferences from sums
A different kind of information about a distribution is sometimes useful when the attribute is numeric: the sum of the attribute over the set, and other moments. We can write a linear Diophantine equation relating such moments to the unknown frequencies and solve for the possible frequency combinations; see [13] for details.

Level 3a: other piecemeal frequency distribution information
The level 2 approach will not work well for sets and attributes that have relatively large mode frequencies. We could get a better (i.e., lower) upper bound if we knew the frequencies of values other than the mode. Letting m2(i,j) represent the frequency of the second most common value of the ith set on the jth attribute, a bound is [min_{1≤i≤s} m(i,j)] + ([min_{1≤i≤s} d(i,j)] − 1) · [min_{1≤i≤s} m2(i,j)]. For this we can prove by contradiction that the frequency of the second most common value of the intersection cannot exceed the minimum of the frequencies of the second most common values of those sets. Let M be the mode frequency of the intersection and let M2 be the frequency of the second most common value in the intersection. Assume M2 is more than the frequency of the second most common value in some set i. Then M2 must correspond to the mode frequency of that set i. But then the mode frequency of the intersection must be less than or equal to the frequency of the second most frequent value in set i, which is a contradiction.
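A minimal sketch of the level 3a bound, with invented per-set statistics: the mode contributes at most the minimum mode frequency, and each of the remaining (min d) − 1 values at most the minimum second frequency.

```python
# Level 3a upper bound: min_i m(i,j) + (min_i d(i,j) - 1) * min_i m2(i,j).

def level3a_sup(sets):
    """sets: list of (m, m2, d) triples for one attribute, where m is the
    mode frequency, m2 the second most common frequency, d distinct values."""
    return (min(m for m, _, _ in sets)
            + (min(d for _, _, d in sets) - 1) * min(m2 for _, m2, _ in sets))

print(level3a_sup([(40, 10, 6), (35, 12, 8)]))  # 35 + 5*10 = 85
```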
For knowledge of the frequency of the median-frequency value (call it mf(i,j)), we can divide the second term into two parts, bounding the more-frequent half of the remaining values by the second most common frequency and the less-frequent half by the median frequency (assuming the median frequency for an even number of frequencies is the higher of the two frequencies it theoretically falls between): [min_{1≤i≤s} m(i,j)] + [min_{1≤i≤s} m2(i,j)] · (⌈min_{1≤i≤s} d(i,j)/2⌉ − 1) + [min_{1≤i≤s} mf(i,j)] · ⌊min_{1≤i≤s} d(i,j)/2⌋. The mean frequency is of no use, since it is always the set size divided by the number of distinct values.

Level 3b: a different bound using the same information
In the same way that level 2b complements level 2a, there is a 3b upper bound that complements the preceding 3a bound: [min_{1≤i≤s} m(i,j)] + [min_{1≤i≤s} m2(i,j)] + [min_{1≤i≤s} ( n(i) − m(i,j) − m2(i,j) − max(0, d(i,j) − min_{1≤k≤s} d(k,j)) )]. (Here we don't include the median frequency, because an upper bound on this for an intersection is not the minimum of the median frequencies of the sets intersected.) The formula can be improved still further if we know the frequency of the least common value in set i, and it is greater than 1: just multiply the max(0, d(i,j) − min_k d(k,j)) term above by this least frequency for i before taking the minimum.

Level 4a: full frequency distribution information
An obvious extension is to knowledge of the full frequency distribution (histogram) for an attribute for each set, but not which value has which frequency. By similar reasoning to the last section, the bound is sum_{k=1}^{min_i d(i,j)} [min_{1≤i≤s} freq(i,j,k)], where freq(i,j,k) is the frequency of the kth most frequent value of the ith set on the jth attribute. This follows from recursive application of the first formula for a level-2b bound. First we decompose the sets into two subsets each, for the mode and non-mode items; then we decompose the non-mode subsets into two subsets each, for their mode and non-mode items; and so on until the frequency distributions are exhausted.
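The level 4a bound sums, rank by rank, the minimum frequency across sets; a short sketch with invented histograms:

```python
# Level 4a upper bound: sum over rank k of the minimum kth frequency
# across the sets' (descending) histograms.

def level4a_sup(histograms):
    """histograms: one list of frequencies per set, sorted descending."""
    depth = min(len(h) for h in histograms)  # a shorter histogram is zero beyond its end
    return sum(min(h[k] for h in histograms) for k in range(depth))

print(level4a_sup([[5, 3, 2], [4, 4, 1, 1]]))  # min(5,4)+min(3,4)+min(2,1) = 8
```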
We can still use this formula if all we know is an upper bound on the actual distribution--we just get a weaker bound. Thus there are many gradations between level 3 and level 4a. This is useful because a classical probability distribution (like a normal curve) that lies entirely above the actual frequency distribution can be specified with just a few parameters and thus be stored in very little space.
As an example, suppose we have two sets characterized by two exponential distributions of numbers between 0 and 2. Suppose we can upper-bound each frequency distribution by an exponential curve, so that there are about 86 items in each set. Then the distribution of the set intersection is bounded above by the pointwise minimum of those two curves, and an upper bound on the size of the intersection is the integral of that minimum over [0, 2].
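A numeric check of this example, under the assumption (not given in the text) that the two bounding curves are f1(x) = 100·exp(−x) and f2(x) = 100·exp(x−2); each integrates to 100·(1 − e^−2) ≈ 86.5 over [0, 2], matching the "about 86" set sizes.

```python
import math

# Hypothetical bounding curves for the two exponential distributions on [0, 2].
f1 = lambda x: 100 * math.exp(-x)      # integrates to 100*(1 - e^-2) ~ 86.5
f2 = lambda x: 100 * math.exp(x - 2)   # mirror image, same integral

# Upper bound on the intersection size: integrate min(f1, f2) by the midpoint rule.
n = 200000
h = 2.0 / n
bound = sum(min(f1((k + 0.5) * h), f2((k + 0.5) * h)) * h for k in range(n))
print(round(bound, 1))  # ~46.5, the closed form 200*(exp(-1) - exp(-2))
```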

Level 4b: Diophantine inferences about values
A different kind of Diophantine inference than that discussed under level 2c can arise when the data distribution is known for some numeric attribute. We may be able to use the sum statistic for set values on that attribute, plus other moments, to infer a list of the only possible values for each set being intersected; then the possible values for the intersection set must occur in every possibility list. We can use this to upper-bound the size of the intersection as the product of an upper bound on the mode frequency of the intersection and the number of possible values of the intersection. To make this solution practical we require that (a) the number of distinct values in each set being intersected is small with respect to the size of the set, and (b) the greatest common divisor of the possible values is not too small a fraction (say, no less than 0.001) of the largest possible value. Then we can write a linear Diophantine equation in unknowns which this time are the possible values, and solve for all possibilities. Again, see [13] for further details.
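A small feasibility search in the spirit of these Diophantine inferences (the paper's actual method is in [13]): given a set's size and the sum of a numeric attribute over the set, enumerate the value-count assignments over a hypothetical candidate value list that are consistent with both statistics.

```python
# Enumerate (value, count) assignments consistent with a known set size and
# attribute sum; the candidate value list is an illustrative assumption.

def feasible_assignments(values, size, total):
    """All assignments of counts to `values` (positive integers, ascending)
    with counts summing to `size` and value*count summing to `total`."""
    out = []
    def rec(i, size_left, total_left, chosen):
        if size_left == 0:
            if total_left == 0:
                out.append(list(chosen))
            return
        if i == len(values):
            return
        for c in range(size_left + 1):
            if values[i] * c > total_left:
                break  # larger counts only overshoot further
            chosen.append((values[i], c))
            rec(i + 1, size_left - c, total_left - values[i] * c, chosen)
            chosen.pop()
    rec(0, size, total, [])
    return out

# 3 items summing to 9, with candidate values {1, 2, 5}:
print(feasible_assignments([1, 2, 5], 3, 9))  # [[(1, 0), (2, 2), (5, 1)]]
```

With a single feasible assignment, the set's distinct values (here 2 and 5) are fully determined by the statistics.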

Level 5: tagged frequency distributions
Finally, the best kind of frequency-distribution information we could have about sets would specify exactly which values in each distribution have which frequencies. This gives an upper bound of sum_{k=1}^{d(U,j)} [min_{1≤i≤s} gfreq(i,j,k)], where gfreq(i,j,k) is the frequency of globally-numbered value k of attribute j for set i, which is zero when value k does not occur in set i, and where d(U,j) is the number of distinct values for attribute j in the data universe U.
All that is necessary to identify values is a unique code, not necessarily the actual value. Bit strings can be used together with an (unsorted) frequency distribution of the values that do occur at least once. Notice that level 5 information is analogous to level 1 information, as it represents sizes of particular subsets formed by intersecting each original set with the set of all items in the relation having a particular value for a particular attribute. This is what [12] calls "second-order sets" and [5] "2-sets". Thus we have come full circle, and there can be no "higher" levels than 5.

Lower bounds from frequency distributions
On occasion we can get nonzero lower bounds ("inf") on the size of a set intersection, when the size of the data universe U is known, and the sets being intersected are almost its size.

Lower bounds: levels 1 and 5
A set intersection is the same as the complement (with respect to the universe) of the union of the complements. An upper bound on the size of a union of sets is the sum of their sizes. Hence a lower bound on the size of the intersection, when the universe U is of size N, is max(0, [sum_{i=1}^{s} n(i)] − (s−1)·N), which is the statistical form of the simplest case of the Bonferroni inequality. For most sets of interest to a database user this will be zero, since the sum is at most sN. But with only two sets being intersected, or with sets corresponding to weak restrictions (that is, sets including almost all of the universe except for a few unusual items, intersected with others to get the effect of removing those items), a nonzero lower bound may occur more often. For level 5 information the bound is sum_{k=1}^{d(U,j)} max(0, [sum_{i=1}^{s} gfreq(i,j,k)] − (s−1)·gfreq(U,j,k)), where gfreq(i,j,k) is as before the number of occurrences of value k of the jth attribute for the ith set, U is the universe set, and d(U,j) is the number of distinct values for attribute j among the items of U.
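The level 1 Bonferroni lower bound is a one-liner; it is nonzero only when the sets nearly fill the universe.

```python
# Level 1 lower bound on an intersection: at least sum(n) - (s-1)*N items
# must be shared, since each set excludes at most N - n(i) items.

def intersection_inf_level1(sizes, N):
    s = len(sizes)
    return max(0, sum(sizes) - (s - 1) * N)

print(intersection_inf_level1([90, 95], 100))  # 85: at most 10 + 5 items are missed
print(intersection_inf_level1([40, 50], 100))  # 0: no useful lower bound
```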

Lower bounds: levels 2, 3, and 4
It is more difficult to obtain nonzero lower bounds when statistical information is not tagged to specific values, as for what we have called levels 2, 3, and 4. If we know the mode values as well as the mode frequencies, and the modes are all identical, we can bound the frequency of the mode in the intersection by the formula analogous to level 1 above, using the mode frequency of the universe for N. Without mode values, we can still infer that the modes of some large sets are identical whenever, for each i, m(i,j) > m2(i,j) + (N − n(i)), where m(i,j) is the mode frequency of set i on attribute j, m2(i,j) the frequency of the second most common value, n(i) the size of set i, and N the size of the data universe; the condition forces each set's mode to be the mode of the universe.
The problem for level 4 lower bounds is that we do not know which frequencies have which values. But if we have some computer time to spend, we can exhaustively consider combinatorial possibilities, excluding those impossible given the frequency distribution of the universe, and take as the lower bound the lowest level-5 bound. For instance, with an implementation of this method in Prolog, we considered a universe with four data values for some attribute, where the frequency distribution of the universe was (54, 53, 52, 51), and the frequency distributions of the two sets intersected were

Definitional sets
Another very different way of getting lower bounds is from knowledge of how the sets intersected were defined. If we know that set i was defined as all items having particular values for an attribute j, then in analyzing an intersection including set i, the "definitional" set i contributes no restrictions on attributes other than j and can be ignored. This is redundant information with levels 1 and 5, but it may help with the other levels. For instance, for i1 definitional on attribute j, a lower bound on the size of the intersection of sets i1 and i2 is the frequency of the least frequent value (the "antimode") of set i2 on j.

Better bounds from relaxation on sibling sets
Both upper and lower bounds can possibly be improved by relaxation among related sets in the manner of [3], work aimed at protection of data from statistical disclosure. This requires a good deal more computation time than the closed-form formulae in this paper and requires sophisticated algorithms. Thus we do not discuss it here.

Set unions
Rules analogous to those for intersection bounds can be obtained for union bounds. Most of these are lower bounds.

Defining unions from intersections
Since n(i ∪ j) = n(i) + n(j) − n(i ∩ j), where n(i ∪ j) means the size of the union of set i and set j and n(i ∩ j) means the size of their intersection, extending our previous notation for set size, it follows that inf(i ∪ j) = n(i) + n(j) − sup(i ∩ j) and sup(i ∪ j) = n(i) + n(j) − inf(i ∩ j); using the distribution of intersection over union, analogous formulae follow for more sets. Another approach to unions is to use complements of sets and DeMorgan's law: n(i ∪ j) = N − n(i' ∩ j'), where i' is the complement of set i with respect to the universe of size N. The problem with using this is the computing of statistics on the complement of a set, something difficult for level 2, 3, and 4 information.
In one important situation the calculation of union sizes is particularly easy: when the two sets unioned are disjoint (that is, their intersection is empty). Then the size of the union is just the sum of the set sizes, by the first formula in this section. Disjointness can be known a priori, or we can infer it using methods in section 6.1.2.

Level 1 information for unions
To obtain union bound rules from intersection rules, we can do a "compilation" of the above formulae (section 3.5.5 of [12] gives other examples of this process) by substituting rules for intersections in them and simplifying the result. Substituting the level 1 intersection bounds in the above set-complement formula gives inf = max_{1≤i≤s} n(i) and sup = min(N, sum_{i=1}^{s} n(i)). Here we use the standard notation of "inf" for the lower bound and "sup" for the upper bound.
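These compiled level 1 union bounds are easy to sketch: the union is at least as large as the largest set, and no larger than the sum of the set sizes or the universe, whichever is smaller.

```python
# Level 1 union bounds: (inf, sup) = (max of sizes, min(N, sum of sizes)).

def union_bounds_level1(sizes, N):
    return max(sizes), min(N, sum(sizes))

print(union_bounds_level1([40, 70], 100))  # (70, 100): sum exceeds the universe
print(union_bounds_level1([10, 15], 100))  # (15, 25)
```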

Level 2b unions
If we know the mode frequency m(i,j) and the number of distinct values d(i,j) on attribute j, then we can use a formula analogous to the level 2b intersection upper bound, a lower bound on the union: [max_{1≤i≤s} m(i,j)] + [max_{1≤i≤s} (n(i) − m(i,j))].

Level 2a unions
The approach used in level 2a for intersections is difficult to use here. We cannot use the negation formula to relate unions to intersections, because there is no comparable multiplication of two quantities (like mode frequency and number of distinct values) that gives a lower bound on something. However, for two sets we can use the other (first) formula relating unions to intersections to get a union lower bound: n(1) + n(2) − min(m(1,j), m(2,j)) · min(d(1,j), d(2,j)). For three sets, using an upper bound for each pairwise intersection and a lower bound of zero for the triple intersection, it becomes n(1) + n(2) + n(3) − min(m(1,j), m(2,j))·min(d(1,j), d(2,j)) − min(m(1,j), m(3,j))·min(d(1,j), d(3,j)) − min(m(2,j), m(3,j))·min(d(2,j), d(3,j)). The formulae get messy for more sets.

Level 3b unions
Analogous to level 2b, we have the lower bound [max_{1≤i≤s} m(i,j)] + [max_{1≤i≤s} m2(i,j)] + [max_{1≤i≤s} ( n(i) − m(i,j) − m2(i,j) + max(0, max_{1≤k≤s} d(k,j) − d(i,j)) )], where m2(i,j) is the frequency of the second most common value of set i on attribute j. And if we know the frequency of the least common value in set i, we multiply the max(0, max_k d(k,j) − d(i,j)) term above by it before taking the outer maximum.

Level 3a unions
Analogous to level 2a, and to level 3a intersections, we have for the union of two sets a lower bound of n(1) + n(2) − [min(m(1,j), m(2,j)) + min(m2(1,j), m2(2,j)) · (⌈min(d(1,j), d(2,j))/2⌉ − 1) + min(mf(1,j), mf(2,j)) · ⌊min(d(1,j), d(2,j))/2⌋], where m2 is the frequency of the second most common value, and mf the frequency of the median-frequency value.

Level 4 unions
The analysis of level 4 unions is analogous to that of level 4a for intersections, giving a lower bound of sum_{k=1}^{max_i d(i,j)} [max_{1≤i≤s} freq(i,j,k)], where freq(i,j,k) is the frequency of the kth most frequent value of the ith set unioned on the jth attribute, taken as zero when k exceeds d(i,j).

Complements
To complete our coverage of set algebra we need set complements. The size of a complement is just the difference of the size N of the universe U (something that is often important, so we ought to know it) and the size of the set. An upper bound on a complement is N minus a lower bound on the size of the set; a lower bound on a complement is N minus an upper bound on the size of the set.

Embedded set expressions
So far we have only considered intersections, unions, and complements of simple sets about which we know exact statistics. But if the set-description language permits arbitrary embedding of query expressions, new complexities arise.
One problem is that the formulae of sections 4.1-4.4 require exact values for statistics, and such statistics are usually impossible to obtain exactly for an embedded expression. But we can substitute upper bounds on the embedded-expression statistics in upper-bound formulae (or lower bounds when the statistic is preceded in the formula by a minus sign). Similarly, we can substitute lower bounds on the statistics in lower-bound formulae (or upper bounds when preceded in the formula by a minus sign). This works for statistics on counts, the mode frequency, the frequency of the second most common value, and the number of distinct values--but not the median frequency.

Summary of equivalences
Another problem is that there can be many equivalent forms of a Boolean-algebra expression, and we have to be careful which equivalent form we choose because different forms give different bounds. Appendix A surveys the effect of various equivalences of Boolean algebra on bounds using level 1 information. Commutativity and associativity do not affect bounds, but factoring out of common sets in conjuncts or disjuncts with distributive laws is important since it usually gives better bounds and cannot worsen them. Factoring out enables other simplification laws which usually give better bounds too.
The formal summary of Appendix A is in Figure 1 ("yes" means better in all but trivial cases). Since these transformations are sufficient to derive any set expression equivalent to another, the information in the table is sufficient to determine whether one expression is always better than another.

The best form of a given set expression, for level 1 information
So the best form for the best level 1 bounds is a highly factored form, quite different from a disjunctive normal form or a conjunctive normal form. What matters is not the number of Boolean operators but the number of set occurrences they operate on, so we don't want the "minimum-gate" form important in classical Boolean optimization techniques like Karnaugh maps. Minimum-term form [6] seems to be closest to what we want; note that all the useful transformations in the above table reduce the number of terms in an expression. Minimum-term form makes sense because multiple occurrences of the same term should be expected to cause suboptimal bounds, arising from failure to exploit the perfect correlation of items in the occurrences. Unfortunately, the algorithms in [6] for transforming a Boolean expression to this form are considerably more complicated than the one to a minimum-gate form.
Minimum-term form is not unique. Consider these three equivalent expressions: (A ∩ (B ∪ C)) ∪ (B ∩ C), (B ∩ (A ∪ C)) ∪ (A ∩ C), and (C ∩ (A ∪ B)) ∪ (A ∩ B). These cannot be ranked in a fixed order, though they are all preferable (by their use of a distributive law) to the unfactored equivalent (A ∩ B) ∪ (B ∩ C) ∪ (A ∩ C). So we may need to compute bounds on each of several minimum-term forms and take the best bounds. This situation should not arise very often, because users will query sets with few repeated mentions of the same set--parity queries are rarely needed.
Another problem with the minimum-term form is that it does not always give optimal bounds. For instance, let set A in the above be the union of two new sets D and E. Let the sizes of B, C, D, and E respectively be 10, 7, 7, and 8. Then the three factored forms give upper bounds respectively of min(15,17) + min(10,7) = 22, min(10,22) + min(15,7) = 17, and min(7,25) + min(15,10) = 17. But the first form is the minimum-term form, with 6 terms instead of 7. However, this situation only arises when there are different ways to factor, and can be forestalled by calculating a bound separately for the minimum-term form corresponding to every different way of factoring.
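Assuming the three factored forms are the three factorings of the majority expression (A ∩ B) ∪ (B ∩ C) ∪ (A ∩ C) (an assumption, chosen because it reproduces the arithmetic above), the level 1 sup computation can be checked directly:

```python
# Level 1 sup rules: sup of an intersection is the min of operand sups;
# sup of a union is the sum of operand sups.
B, C, D, E = 10, 7, 7, 8
A = D + E  # sup for A when A is the union D ∪ E: 15

bounds = [
    min(A, B + C) + min(B, C),  # (A∩(B∪C)) ∪ (B∩C): min(15,17)+min(10,7)
    min(B, A + C) + min(A, C),  # (B∩(A∪C)) ∪ (A∩C): min(10,22)+min(15,7)
    min(C, A + B) + min(A, B),  # (C∩(A∪B)) ∪ (A∩B): min(7,25)+min(15,10)
]
print(bounds)  # [22, 17, 17]
```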

Embedded expression forms with other levels of information
Level 5 is analogous to level 1--it just represents a partition of all the sets being intersected into subsets of a particular range of values on a particular attribute, with bounds being summed up on all such ranges of values of the attribute. Thus the above "best" forms will be equally good for level 5 information. Analysis is considerably more complicated for levels 2, 3, and 4 since we do not have both upper and lower bounds in those cases. But the best forms for level 1 can be used heuristically then.

Analysis of storage requirements

Some formulae
Assume a universe of r attributes on N items, each attribute value requiring an average of w bits of storage. The database thus requires rNw bits of storage. Assume we only tabulate statistics on "1-sets" [5] or "first-order sets" [12], universe partitions by the values of single attributes. Assume there are m approximately even partitions on each attribute. Then the space required for storage of statistics is as follows. Level 1: there are mr sets with just a set size tabulated for each. Each set size should average about N/m, and should require about log2(N/m) bits, so a total of about mr·log2(N/m) bits are required. This will tend to be considerably less than rNw, the size of the database, because w will likely be on the same order as log2(N/m), and m is considerably less than N. Level 2: we store two statistics (the mode frequency and the number of distinct values) per set instead of one, hence about 2mr·log2(N/m) bits. Level 3: we need twice as much space as level 2 to include the second highest frequency and the median frequency statistics too, hence about 4mr·log2(N/m) bits.
Level 4: we can describe a distribution either implicitly (by a mathematical formula approximating it) or explicitly (by listing of values). For implicit storage, we need to specify a distribution function and absolute deviations above and below it (since the original distribution is discrete, it is usually easier to use the corresponding cumulative distributions). We can use codes for common distributions (like the uniform distribution, the exponential, and the Poisson), and we need a few distribution parameters of w bits, plus the positive and negative deviation extrema of w bits each too. So space will be similar to level 3 information.
If a distribution is not similar to any known distribution, we must represent it explicitly. Assume data items are aggregated into approximately equal-size groups of values; the m-fold partitioning that defined the original sets is probably good (else we would not have chosen it for the other purpose originally), so let us assume it. Then we have m bin counts for each of the mr sets, each count requiring about log2(N/m) bits, for a total of about (m^2)r·log2(N/m) bits. If some of the groups of values (bins) on a set are empty, we can of course omit them and save space.
Level 5: this information is similar to level 4 except that values are associated with points of the distribution. Implicit representation by good-fit curves requires just as much space as level-4 implicit representation--we just impose a fixed ordering of values along the horizontal axis instead of sorting by frequency. Explicit representation also takes the level-4 space of about (m^2)r·log2(N/m) bits, but an alternative is to give pairs of values and their associated frequencies, which is good when data values are few in number.
We also need storage for access structures. If users query only a few named sets, we can just store the names in a separate lexicon table mapping names to unique integer identifiers, requiring a total of about mr(l + log2(mr)) bits for the table, where l is the average length of a name in bits, assuming all statistics on the same set are stored together.
But if users want to query arbitrary value partitions of attributes, rather than named sets, we must also store definitions of the sets about which we have tabulated statistics. For sets that are partitions of numeric attributes, the upper and lower limits of each subrange are sufficient, for 2mw bits per attribute. But nonnumeric attributes are more trouble, because we usually have no alternative but to list the set to which each attribute value belongs. We can do this with a hash table on value, at twice the space of the entries themselves assuming a 50% hash-table occupancy. A variety of compression techniques can be applied to storage of statistics, extending standard compression techniques for databases [15], so the storage calculations above can be considered upper bounds.
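As a quick sanity check on these magnitudes, here is a minimal Python sketch comparing level-1 statistics storage against the size of the database itself. The function names and parameter values are invented for illustration, and the level-1 figure assumes each of the mr counts takes about log2(N/m) bits as discussed above:

```python
from math import ceil, log2

def level1_statistics_bits(N, r, m):
    # m*r set sizes, each averaging N/m items and so
    # needing about log2(N/m) bits to record
    return m * r * ceil(log2(N / m))

def database_bits(N, r, w):
    # r attributes on N items, w bits per attribute value
    return r * N * w

# Illustrative parameters: a million items, 4 attributes,
# 32-bit values, 100 partitions per attribute.
N, r, w, m = 10**6, 4, 32, 100
stats = level1_statistics_bits(N, r, m)  # 100*4*14 = 5600 bits
data = database_bits(N, r, w)            # 128,000,000 bits
assert stats * 1000 < data               # statistics are a tiny overhead
```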
These storage requirements are not necessarily bad, not even the level 4 and 5 explicit distributions. In many databases, storage is cheap. If a set intersection is often used, or a bound is needed to determine how to perform a large join when a wrong choice may mean hours or days more time, quick reasoning with a few page fetches of precomputed statistics (it's easy to group related precomputed statistics on the same page) will usually be much faster than computing the actual statistic or estimating it by unbiased sampling. That is because the number of page fetches is by far the major determinant of execution time for this kind of simple processing. Computing the actual statistic would require looking at every page containing items of the set; random sampling will require examining nearly as many pages, even if the sampling ratio is small, because except in the rare event in which the placement of records on pages is random (generally a poor database design strategy), records selected will tend to be the only records used on a page, and thus most of the page-fetch effort is "wasted." Reference [14] discusses these issues further.

Evaluation of the frequency-distribution bounds
6. Level 5 upper bounds are better than level 4a by the proof in Appendix B.
7. Level 5 lower bounds are better than level 1 lower bounds because level 5 partitions the level 1 sets into many subsets and computes lower bounds separately on each subset instead of all at once.
Analogous arguments hold for bounds on unions since rules for unions were created from rules for intersections.

Experiments
There are two rough criteria for bounds on set intersection and union sizes to be more useful than estimates of those same quantities: 1. Some of the sets being intersected or unioned are significantly nonindependent (that is, not drawn randomly from some much larger population). Hence the usual estimates of their intersection size obtained from level 1 information (the sizes of the intersected sets) will be poor.
2. At least one set being intersected or unioned has a significantly different frequency distribution from the others on at least one attribute. This requires that at least one set has values on an attribute that are not randomly drawn.
These criteria can be justified by the general homomorphism idea behind our approach (see section 3): good bounds result whenever values in the range of the homomorphism get very different counts mapped onto them for each set considered. These criteria can be used to decide which sets on a database it might be useful to store statistics for computing bounds.

Experiments: nonrandom sets
As a simple illustration, consider the experiments summarized in the tables of Figures 3 and 4. We created a synthetic database of 300 tuples of four attributes whose values were evenly distributed random digits 0-9. We wrote a routine (MIX) to generate random subsets of the data set satisfying the above two criteria, finding groups of subsets that had unusually many common values. We conducted 10 experiments each on random subsets of sizes 270, 180, 120, and 30. There were four parts to the experiment, each summarized in a separate table. In the top tables in Figures 3 and 4, we estimated the size of the intersection of two sets; in the lower tables, we estimated the size of the intersection of four sets. In Figure 3 the chosen sets had 95% of the same values; in Figure 4, 67%. The entries in the tables represent means and standard deviations in 10 experiments of the ratios of bounds or estimates to the actual intersection size. There are four pairs of columns for the four different set sizes investigated. The rows correspond to the various frequency-distribution levels discussed: the five levels of upper bounds first, then two estimate methods, then the two lower bound methods. (Since level 5 information is just level 1 information at a finer level of detail, it is easier to generalize the level 1 estimate formula to a level 5 estimate formula.) Only level 2a and 3a rules were used, not 2b and 3b.
The advantage of bounds shows in both Figure 3 and Figure 4, but more dramatically in Figure 3, where the sets have 95% overlap. Unsurprisingly, lower bounds are most helpful for the large set sizes (left columns), whereas upper bounds are most helpful for the small set sizes (right columns). However, the lower bounds are not as useful because when they are close to the true set size (i.e., the ratio is near 1), estimates are also close; but when upper bounds are close to the true set size for small sets, both estimates and lower bounds can be far away.

Experiments: real data
The above experiments were with synthetic data, but we found similar phenomena with real-world data. A variety of experiments, summarized in [17], were done with data extracted from a database of medical (rheumatology) patient records. Performance of estimate methods vs. our bounding methods was studied for different attributes, different levels of information, and different granularities of statistical summarization. Results were consistent with the preceding ones for a variety of set types. This should not be surprising since our two criteria given previously are often fulfilled with medical data, where different measures (tests, observations, etc.) of the sickness of a patient often tend to correlate.

Bounds from range analysis
Frequency-distribution bounds are only one example of a class of bounding methods involving mappings (homomorphisms) of a set of data items onto a distribution. Another very important example is bounds obtained from analysis of the range of values, for some attribute (call it j), of the data items in each set intersected. These methods essentially create new sets, defined as partitions on j, which contain the intersection or union being studied. These new sets can therefore be included in the list of sets being intersected or unioned without affecting the result, and this can lead to tighter (better) bounds on the size of the result. Many formulas analogous to those of section 4 can be derived.

Statistics on partitions of an attribute
All the methods we will discuss require partition counts on some attribute j: that is, the numbers of data items lying in mutually exclusive and exhaustive ranges of possible values for j. For instance, we may know the number of people aged 0-9, 10-19, 20-29, etc.; or the number of people with incomes 0-9999, 10000-19999, 20000-29999, etc. We require that the attribute be sortable by something other than item frequency for this partitioning to make sense and be different from the frequency-distribution analysis just discussed; this means that most suitable attributes are numeric.
This should not be interpreted, however, as requiring anticipation of every partition of an attribute that a user might mention in a query, just a covering set. To get counts on arbitrary subsets of the ranges, inequalities of the Chebyshev type may be used when moments are known, as for instance Cantelli's inequality: the probability that X ≥ µ + kσ is at most 1/(1 + k^2), for µ the mean and σ the standard deviation of the attribute. Otherwise the count of a containing range partition may be used as an upper bound on the subset count.
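A minimal Python sketch of this use of Cantelli's inequality; the function name and the example figures are invented for illustration:

```python
def cantelli_count_upper_bound(N, mean, std, threshold):
    # Upper bound on how many of N items can have attribute value
    # >= threshold, from Cantelli's one-sided inequality:
    # P(X - mean >= k*std) <= 1 / (1 + k^2) for k > 0.
    if threshold <= mean:
        return N  # the inequality says nothing useful below the mean
    k = (threshold - mean) / std
    return N / (1.0 + k * k)

# 10,000 people with mean age 40 and standard deviation 10:
# at most a fifth of them can be 60 or older.
bound = cantelli_count_upper_bound(10000, 40.0, 10.0, 60.0)
assert bound == 2000.0
```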

Upper bounds from set ranges and bin counts on the universe (level 1)
Suppose we know partition (bin) counts on some numeric attribute j for the universe U. (We must know them for at least one set to apply these methods, so it might as well be the universe.) Suppose we know the maximum h(i,j) and minimum l(i,j) on attribute j for each set i being intersected. Then an upper bound on the maximum of the intersection H(j), and a lower bound on the minimum of the intersection L(j), are H(j) = min over i of h(i,j), and L(j) = max over i of l(i,j). Note if H(j) < L(j) we can immediately say the intersection is the empty set. Similarly, for the union of sets, H(j) = max over i of h(i,j) and L(j) = min over i of l(i,j). So an intersection or union must be a subset of U that has values bounded by L(j) and H(j) on attribute j, for any numeric attribute j. So an upper bound on the size of an intersection or union is the minimum-size such range-partition set over all attributes j, that is, the minimum over the r numeric attributes j of the sum of binfreq(U,j,k) for k ranging from B(L(j),j) to B(H(j),j); where s sets are intersected, B(x,j) denotes the number of the bin into which value x falls on attribute j, and binfreq(U,j,k) is the number of items in partition (bin) k on attribute j for the universe U.
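A sketch of this level-1 computation in Python. The data-structure layout (dictionaries keyed by attribute name) and the example bins are assumptions made for illustration:

```python
def level1_range_upper_bound(set_ranges, universe_binfreq, bin_of):
    # set_ranges[j]: list of (l(i,j), h(i,j)) pairs, one per set i
    # universe_binfreq[j][k]: universe count in bin k of attribute j
    # bin_of(x, j): the bin index B(x,j) of value x on attribute j
    best = None
    for j, pairs in set_ranges.items():
        L = max(lo for lo, hi in pairs)  # intersection min >= max of minima
        H = min(hi for lo, hi in pairs)  # intersection max <= min of maxima
        if H < L:
            return 0  # ranges do not overlap: intersection is empty
        total = sum(universe_binfreq[j][k]
                    for k in range(bin_of(L, j), bin_of(H, j) + 1))
        best = total if best is None else min(best, total)
    return best

# One numeric attribute "age" with bins of width 10 (bin k holds 10k..10k+9).
bin_of = lambda x, j: int(x) // 10
set_ranges = {"age": [(25, 67), (31, 80)]}  # extrema of two intersected sets
universe_binfreq = {"age": [50, 60, 80, 90, 70, 40, 30, 20, 10, 5]}
# Overlap [31, 67] covers bins 3..6, so the bound is 90+70+40+30 = 230.
assert level1_range_upper_bound(set_ranges, universe_binfreq, bin_of) == 230
```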
Absolute bounds on correlations between attributes may also be exploited. If two numeric attributes have a strong relationship to each other, we can formally characterize a mapping from one to the other with three items of information: the algebraic formula, an upper deviation from the fit to that formula for the universe U, and a lower deviation. We can calculate these three things for pairs of numeric attributes on U, and store only the information for pairs with strong correlations. To use correlations in finding upper bounds, for every attribute j we find L(j) and H(j) by the old method. Then, for every stored correlation from an arbitrary attribute j1 to an arbitrary attribute j2, we calculate the projection of the range of j1 (from L(j1) to H(j1)) through the stored formula onto j2. The overlap of this range with the original range of j2 (from L(j2) to H(j2)) is then the new range on j2, and L(j2) and H(j2) are updated if necessary. Applying these correlations requires iterative relaxation methods, since narrowing of the range of one attribute may allow new and tighter narrowings of ranges of attributes to which that attribute correlates, and so on.
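The relaxation loop might be sketched as follows. The representation chosen for a stored correlation (a monotone increasing fit function plus absolute deviation bounds) and all names and data are assumptions for illustration:

```python
def relax_ranges(ranges, correlations, max_iters=100):
    # ranges[j] = (L, H): current range on attribute j
    # correlations: (j1, j2, f, dev_lo, dev_hi) tuples meaning that for
    # every item, attribute j2 lies in [f(x) - dev_lo, f(x) + dev_hi],
    # where x is the item's value on j1 (f assumed monotone increasing).
    for _ in range(max_iters):
        changed = False
        for j1, j2, f, dev_lo, dev_hi in correlations:
            lo1, hi1 = ranges[j1]
            lo2, hi2 = ranges[j2]
            # project j1's range through f, widened by the fit deviations,
            # and intersect with j2's current range
            new_lo = max(lo2, f(lo1) - dev_lo)
            new_hi = min(hi2, f(hi1) + dev_hi)
            if (new_lo, new_hi) != (lo2, hi2):
                ranges[j2] = (new_lo, new_hi)
                changed = True
        if not changed:  # fixed point: no range narrowed this pass
            break
    return ranges

# Hypothetical data: salary is about 1000*age plus or minus 5000,
# and range analysis has already narrowed age to [30, 50].
ranges = {"age": (30, 50), "salary": (0, 100000)}
correlations = [("age", "salary", lambda a: 1000 * a, 5000, 5000)]
assert relax_ranges(ranges, correlations)["salary"] == (25000, 55000)
```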

Upper bounds from mode frequencies on bin counts for intersected sets (level 2)
At the next level of information, analogous to level 2 for frequency-distribution bounds, we may know the mode frequency mf(i,j) (the largest bin count on attribute j) for each set i being intersected. Since the intersection is a subset of every set i, no bin of the intersection can hold more than the minimum over i of mf(i,j) items, so we can tighten the level-1 formula by replacing binfreq(U,j,k) with min(binfreq(U,j,k), min over i of mf(i,j)).

Upper bounds from bin counts for intersected sets (level 5)
Finally, if we know the actual distribution of bin counts for each set i being intersected, we can modify the intersection formula of level 1 as follows: the upper bound becomes the minimum over the r numeric attributes j of the sum, for k from B(L(j),j) to B(H(j),j), of the minimum over the s intersected sets i of binfreq(i,j,k); where B(x,j) is the number of the bin into which value x falls on attribute j, and where binfreq(i,j,k) is the number of items in partition (bin) k on attribute j for set i. Similarly, the union upper bound replaces the inner minimum with min(binfreq(U,j,k), the sum over i of binfreq(i,j,k)), with k ranging over all bins. As with frequency-distribution level 4a and level 5 bounds, we can also use this formula when all we know is an upper bound on the bin counts, perhaps from a distribution fit.
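A sketch of the level-5 intersection bound in Python, summing the per-bin minima over the intersected sets and then taking the best attribute. The nested-list layout of binfreq and the example counts are assumptions for illustration:

```python
def level5_intersection_upper_bound(binfreq):
    # binfreq[i][j][k]: count of items of set i in bin k of attribute j.
    # For each attribute j, no bin of the intersection can hold more
    # items than the smallest per-set count for that bin; sum those
    # minima over bins, then take the minimum over attributes.
    num_sets = len(binfreq)
    num_attrs = len(binfreq[0])
    bounds = []
    for j in range(num_attrs):
        num_bins = len(binfreq[0][j])
        total = sum(min(binfreq[i][j][k] for i in range(num_sets))
                    for k in range(num_bins))
        bounds.append(total)
    return min(bounds)

# Two sets, one attribute with four bins.
binfreq = [
    [[10, 0, 5, 3]],  # set 1
    [[2, 8, 5, 0]],   # set 2
]
# Per-bin minima are 2, 0, 5, 0, so the bound is 7.
assert level5_intersection_upper_bound(binfreq) == 7
```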

Multidimensional intersection range analysis
Analogous to range analysis, we may be able to obtain a multivariate distribution that is an upper bound on the distribution of the data universe U over some set S of interest (as discussed in [9] and [2]). We determine ranges on each attribute of S by finding the overlap of the ranges for each set being intersected as before. This defines a hyperrectangular region in hyperspace, and the universe upper bound also bounds the number of items inside it. We can also use various multivariate generalizations of Chebyshev's inequality [1] to bound the number of items in the region from knowledge of moments of any set containing the intersection set (including the universe). As with univariate range analysis, we can exploit known correlations to further truncate the ranges on each attribute of S, obtaining a smaller hyperrectangular region.
Another class of correlation we can use is specific to multivariate ranges: those between attributes in the set S itself. For instance, a tight linear correlation between two numeric attributes j1 and j2 strongly limits the number of items within rectangles the regression line does not pass through. If we know absolute bounds on the regression fit we can infer zero items within whole subregions. If we know a standard error on the regression fit we can use Chebyshev's inequality and its relatives to bound how many items can lie certain distances from the regression line.
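A minimal Python sketch of the last point; the function name and example figures are invented, and the bound is just Chebyshev's inequality applied to the regression residuals:

```python
def off_line_count_bound(N, residual_std, distance):
    # Upper bound on the number of items whose value lies more than
    # `distance` from the regression prediction, given the standard
    # error of the fit: P(|residual| >= t*std) <= 1/t^2 (Chebyshev).
    t = distance / residual_std
    if t <= 1.0:
        return N  # Chebyshev is vacuous within one standard error
    return N / (t * t)

# 5000 records with a fit standard error of 2.0: at most 200 of them
# (4%) can lie 10 or more units away from the regression line.
assert off_line_count_bound(5000, 2.0, 10.0) == 200.0
```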
Just as for univariate range analysis, we can exploit more detailed information about the distributions of any attribute (not necessarily the ones in S). If we know an upper bound on bin size, for some partitioning into subregions or "bins", or if we know the exact distribution of bin sizes, we may be able to improve on the level 1 bounds.

Lower bounds from range analysis
Lower bounds can be obtained from substituting the above upper bounds in the first three formulae relating intersections and unions in section 4.4.1, either substituting for the intersection or for the union. Unfortunately the resulting formulae are complicated, so we won't give them here.
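One simple instance can still be sketched in Python under illustrative names: from |A ∩ B| = |A| + |B| - |A ∪ B|, any upper bound on the union yields a lower bound on the intersection; the s-set generalization uses the fact that the set sizes sum to at most |intersection| + (s-1)|union|:

```python
def intersection_lower_bound(set_sizes, union_upper):
    # For s sets: sum of |Ai| <= |intersection| + (s-1)*|union|,
    # so |intersection| >= sum(|Ai|) - (s-1)*upper_bound(|union|).
    s = len(set_sizes)
    return max(0, sum(set_sizes) - (s - 1) * union_upper)

# Two sets of 80 and 70 items whose union is known to hold at most
# 100 items must share at least 50 items.
assert intersection_lower_bound([80, 70], 100) == 50
# With a weak union bound the formula degrades gracefully to 0.
assert intersection_lower_bound([80, 70], 200) == 0
```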

Embedded set expressions for range analysis
Let us consider the effect of Boolean equivalences on embedded set descriptions for the above range-analysis bounds, for level 1 information. First, range-analysis bounds cannot be provided for expressions with set complements in them, because there is no good way to determine a maximum or minimum of the complement of a set other than the maximum or minimum of the universe. So none of the equivalences involving complements apply.
The only set-dependent information in the level-1 calculation is the extrema of the range, H and L. Equivalence of set expressions under commutativity or associativity of terms in intersections or unions then follows from the commutativity and associativity of the max and min operations, as does distributivity of intersections over unions and vice versa. Equivalence under reflexivity follows because max(a,a) = a and min(a,a) = a. Introduction of terms for the universe and the null set is useless, because max(a,0) = a for a ≥ 0, and min(a,N) = a for a ≤ N. So expression rearrangements do not affect the bounds, so we might as well not bother; that seems a useful heuristic for level 2 and 5 information too.

Storage requirements for range analysis
Space requirements for these range-analysis bounds can be computed in the same way as for the frequency-distribution bounds. Assume that the number of bins on each attribute is m, the average number of attributes is r, the number of bits required for each attribute value is w, and the number of items in the database is N. Again, the resulting figures are pessimistic since they assume that all attributes can be helpful for range analysis.

Evaluation of the range-analysis bounds
Level 2 upper bounds are definitely better than level 1 because binfreq(U,j,k) is an upper bound on mf(i,j); level 5 is better than level 2 because mf(i,j) is an upper bound on binfreq(i,j,k). But the average-case performance of the range-analysis bounds is harder to predict than that of the frequency-distribution bounds, since the former depends on widely different data distributions, while the latter's distributions tend to be more similar. Furthermore, maxima and minima statistics have high variance for randomly distributed data, so it is hard to create an average-case situation for them; strong range-restriction effects do occur with real databases, but mostly with human-artifact data that does not fit well to classical distributions. Thus no useful average-case generalizations are possible about range-analysis bounds.

Cascading range-analysis and frequency-distribution methods
The above determination of the maximum and minimum of an intersection set on an attribute can be used to find better frequency-distribution bounds too, since it effectively adds new sets to the list of sets being intersected, sets defined as partitions of the values of particular attributes. These new sets may have unusual distributions on further attributes that can lead to tight frequency-distribution bounds.

Conclusion
We have provided a library of formulae for bounds on the sizes of intersections, unions, and complements of sets. We have emphasized intersections (because of their greater importance) and intersection upper bounds (because they are easier to obtain). Our methods exploit simple precomputed statistics (counts, frequencies, maxima and minima, and distribution fits) on sets. The more we precompute, the better our bounds can be. We illustrated by analysis and experiments the time-space-accuracy tradeoffs involved between different bounds. Our bounds tend to be most useful when there are strong or complex correlations between sets in an intersection or union, a situation in which estimation methods for set size tend to do poorly. This work thus nicely complements those methods.

A.8 Negation equivalences
We have not yet considered negation, but it causes few difficulties. First note that A ∪ ~A = U and A ∩ ~A = ∅, so it is better to replace the former with U and the latter with the null set. We can use this to show another form of absorption is desirable: A ∪ (~A ∩ B) = A ∪ B, and the right side is preferable. DeMorgan's Laws always give two equivalent expressions: ~(A ∩ B) = ~A ∪ ~B, and ~(A ∪ B) = ~A ∩ ~B.
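These negation equivalences are easy to spot-check mechanically on small concrete sets, with complement taken relative to a finite universe; the sets below are arbitrary examples:

```python
# Spot-check the negation equivalences with Python's set operations.
U = frozenset(range(10))
A = frozenset({1, 2, 3})
B = frozenset({3, 4, 5})
comp = lambda S: U - S  # complement relative to the universe U

assert A | comp(A) == U                  # A union ~A is the universe
assert A & comp(A) == frozenset()        # A intersect ~A is the null set
assert A | (comp(A) & B) == A | B        # absorption with negation
assert comp(A & B) == comp(A) | comp(B)  # DeMorgan
assert comp(A | B) == comp(A) & comp(B)  # DeMorgan
```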

Appendix B: proof of the superiority of level 5 frequency-distribution upper bounds to level 4a
For any attribute j, level 5 and level 4a upper-bound calculation can be expressed as operating on a matrix in which the entry in row i and column k represents the kth frequency for set i; this matrix has s rows and d(U,j) columns. But level 4a rows are sorted by decreasing values while level 5 rows are not. To show that level 5 bounds are superior to (less than or equal to) level 4a bounds, we show that the level 4a matrix can be created by a series of binary interchanges on the level 5 matrix, where each interchange cannot improve the criterion for the matrix, the summation of the minima of the columns.
First we prove this for a two-row matrix. Suppose we first sort the columns by decreasing order of second-row frequencies. Now consider sorting the first-row frequencies by a d(U,j)-step process. For each step k, we pick the largest element in the first row exclusive of the first k-1 items, and interchange it with the element in column k. Suppose at some step we interchange a currently-largest value a with another value b, and suppose value a is originally in the same column with d in the second row, and b is originally in the same column with c in the second row. The only effect of this interchange is to substitute in the criterion an expression min(a,c)+min(b,d) for an expression min(a,d)+min(b,c), where we may assume a ≥ b (a was chosen as the largest remaining first-row value) and c ≥ d (the second row is sorted in decreasing order). We can verify the first expression is an upper bound on the second by considering the six possible orderings of a, b, c, and d in turn (see Figure 5). Thus the level-4a upper bound is itself an upper bound on the level-5 upper bound. The result for a two-row matrix easily extends to matrices with more rows, if we just replace references to the values in the second row in the above by references to the minimum value in the column for all but the first-row value. Thus the general result is proved.
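The interchange inequality at the heart of this proof can also be checked mechanically. A small Python sketch that verifies it exhaustively over a grid of values (the case analysis of Figure 5 in miniature):

```python
from itertools import product

def interchange_ok(a, b, c, d):
    # The interchange step: with a >= b and c >= d, pairing the larger
    # values together cannot decrease the sum of column minima:
    # min(a,c) + min(b,d) >= min(a,d) + min(b,c).
    return min(a, c) + min(b, d) >= min(a, d) + min(b, c)

# Exhaustive check over a small grid, covering all six orderings.
for a, b, c, d in product(range(6), repeat=4):
    if a >= b and c >= d:
        assert interchange_ok(a, b, c, d)
```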