A source of SQL Server frustration for me has been
getting SQL Server to use any kind of vaguely good query
plan for not-all-that-complex queries (3-5 tables
joined), in which several of the tables have millions of
records. For some reason, for our particular tables
(which have indexes in all the right places), SQL Server
really likes to do 10-million-row table scans. It's
often required some query rearrangement (and sometimes
even a much-maligned index hint) to get good results.
I wish there were an option to tell SQL Server:
"No matter what, never table scan a table with
more than a million rows; if you come up with a plan
that does that, punt and throw an error message instead,
so a DBA can fix it, rather than pounding on a production
database for 10 minutes straight."
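Something close to this exists per-query in SQL Server 2008 and later: the FORCESEEK table hint. If the optimizer cannot find a seek-based plan for the hinted table, the query fails with an error instead of quietly falling back to a scan. A sketch with hypothetical table and column names:

```sql
-- Hypothetical tables. FORCESEEK forbids a scan of dbo.Orders;
-- if no seek-based plan exists (e.g. no usable index), SQL Server
-- raises an error instead of running a 10-million-row scan.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o WITH (FORCESEEK)
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2008-01-01';
```

It's still a hint rather than a server-wide policy, so it has to be added to each query you want protected.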
more links about SQL optimization
Kyle,
Joins are also extremely expensive. I don't know what
you are doing the joins for (are they lookup tables?),
but sometimes it is easier to do several SELECTs from a
single table and then combine the results together.
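One hedged way to sketch that, assuming a hypothetical lookup-table schema: fetch the lookup value into a variable with its own SELECT, then filter the big table directly instead of joining:

```sql
-- Hypothetical schema: replace a lookup-table join with a separate SELECT.
DECLARE @StatusID int;

SELECT @StatusID = StatusID
FROM dbo.OrderStatus
WHERE StatusName = 'Shipped';

-- The big table is now filtered by a simple predicate, no join required.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE StatusID = @StatusID;
```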
Would ISNULL be any faster than COALESCE?
I've been finding weird behaviour around isnull() and
coalesce(), in terms of query optimization.
In one case, replacing all my coalesce() calls with
isnull() dropped a 2hr+ query down to 10secs. However,
in another case, it had the opposite effect.
And it does look like SQL Server treats the two
differently, almost as if isnull() is more of a built-in
function.
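That matches the documented behaviour: COALESCE is defined as shorthand for a CASE expression, while ISNULL is an intrinsic function, so the optimizer can handle two queries that look the same quite differently. A small sketch (hypothetical table):

```sql
-- COALESCE(Quantity, 0) is treated internally like a CASE expression:
--   CASE WHEN Quantity IS NOT NULL THEN Quantity ELSE 0 END
-- whereas ISNULL(Quantity, 0) remains a single built-in function call.
SELECT COALESCE(Quantity, 0) AS q_coalesce,
       ISNULL(Quantity, 0)   AS q_isnull
FROM dbo.OrderDetails;
```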
> I've been finding weird behaviour around
isnull() and coalesce(), in terms of query optimization
Me too. I've got a query that with coalesce returns 42
rows, and with isnull instead returns 38 rows, even
though as far as I can determine the return values are
the same.
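One documented difference that can change results like this: ISNULL takes the data type (and length) of its first argument, while COALESCE uses the highest-precedence type among all its arguments, so ISNULL can silently truncate the replacement value. A minimal demonstration:

```sql
-- ISNULL truncates the replacement to the first argument's type, varchar(4);
-- COALESCE promotes to the wider type and keeps the full string.
DECLARE @v varchar(4) = NULL;
SELECT ISNULL(@v, 'abcdef')   AS with_isnull,    -- 'abcd'
       COALESCE(@v, 'abcdef') AS with_coalesce;  -- 'abcdef'
```

If a join or WHERE clause compares such a value, the truncated version can match a different set of rows, which would explain a 42-vs-38 row count.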