Clamp adjusted ndistinct to positive integer in estimate_hash_bucketsize().
This avoids a possible divide-by-zero in the following calculation, and rounding the number to an integer seems like saner behavior anyway. Assuming IEEE math, the division would yield +Infinity, which would get replaced by 1.0 at the bottom of the function, so nothing really interesting would ensue; but avoiding divide-by-zero seems like a good idea on general principles.

Per report from Piotr Stefaniak. No back-patch since this seems mostly cosmetic.
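For readers unfamiliar with the helper, clamp_row_est() forces a row-count estimate up to at least 1.0 and rounds it to a whole number, which is what makes the later division safe. A minimal sketch of that behavior (illustrative only, not the PostgreSQL source):

    #include <math.h>

    /*
     * Sketch of the clamping behavior relied on here: never let a row
     * estimate fall below one, and round it to an integer so downstream
     * arithmetic such as 1.0 / ndistinct stays finite and sane.
     * Illustrative only; the real clamp_row_est() lives in the planner.
     */
    static double
    clamp_row_est_sketch(double nrows)
    {
        if (nrows <= 1.0)
            return 1.0;         /* at least one row */
        return rint(nrows);     /* round to nearest integer */
    }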
commit fa09f89351
parent 408f043853
@@ -3541,7 +3541,10 @@ estimate_hash_bucketsize(PlannerInfo *root, Node *hashkey, double nbuckets)
     * selectivity of rel's restriction clauses that mention the target Var.
     */
    if (vardata.rel)
+   {
        ndistinct *= vardata.rel->rows / vardata.rel->tuples;
+       ndistinct = clamp_row_est(ndistinct);
+   }
 
    /*
     * Initial estimate of bucketsize fraction is 1/nbuckets as long as the
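To see why the clamp matters: the code that follows this hunk divides by ndistinct, and if the rows/tuples scaling drives ndistinct to zero, IEEE math turns that division into +Infinity rather than raising an error. A toy, self-contained demo of that failure mode (hypothetical numbers, not PostgreSQL code):

    #include <stdio.h>

    /*
     * Toy demonstration of the scenario the commit guards against:
     * after scaling by rows/tuples, ndistinct can reach 0, and a later
     * division by it yields +Infinity under IEEE math.  Clamping to at
     * least 1 avoids depending on that behavior.
     */
    int
    main(void)
    {
        double ndistinct = 100.0;
        double rows = 0.0;          /* hypothetical: restrictions filter out every row */
        double tuples = 1000.0;

        ndistinct *= rows / tuples;                                 /* ndistinct is now 0.0 */
        printf("unclamped: 1/ndistinct = %g\n", 1.0 / ndistinct);   /* prints inf */

        if (ndistinct <= 1.0)       /* clamp as the fix does */
            ndistinct = 1.0;
        printf("clamped:   1/ndistinct = %g\n", 1.0 / ndistinct);   /* prints 1 */
        return 0;
    }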