OK, I have an SP that users say runs slow. I take a look, it's taking
2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
I munge around the logic and cut the reads to 90k. The CPU rises
slightly to around 3.0. The duration is never below 5.0!'
OK, so WTF is SQLServer doing when it's not doing reads and not
cranking CPU?
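One way to get numbers like these from a query window (just a sketch;
dbo.MySlowProc and @SomeParam are made-up placeholders for the real SP):

-- Show logical/physical reads and CPU vs. elapsed time for one call.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.MySlowProc @SomeParam = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- A big gap between elapsed time and CPU time, with no physical reads,
-- means the time is going to waiting: blocking, disk, or the client
-- pulling the result set.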
This is on a dev server, and there is some mild contention, but the
comparison of the original code to the modified code is pretty
consistent. I tried throwing in a few NOLOCK hints. I did add a #temp
table, which worked better than a @table variable because it only holds
a mighty 100 to 1000 rows and gets auto-stats, I guess. Adding
explicit indexes to it seems to make things (much) worse.
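Roughly the shape of the #temp change, with invented table and column
names (a sketch, not the real code):

-- Stage the 100-1000 working rows in a #temp table.
CREATE TABLE #Work
(
    CustomerID int   NOT NULL,
    Amount     money NOT NULL
);

INSERT INTO #Work (CustomerID, Amount)
SELECT CustomerID, Amount
FROM dbo.SomeBigTable
WHERE SomeFilter = 1;

-- Unlike a @table variable, the #temp table can get auto-created
-- statistics once it is populated, so the optimizer has row counts
-- for the later joins.

-- The kind of explicit index that made things worse for me:
-- CREATE CLUSTERED INDEX IX_Work ON #Work (CustomerID);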
Thanks.
Josh

Hi
Roughly, the difference between the CPU time and duration is your client
retrieving the data across the network.
How many rows are you pushing back, and can the client handle the volume?
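A quick way to test that (just a sketch; the proc name, parameter and
columns below are placeholders): run the SP so the rows never leave the
server and compare the duration to the CPU time.

-- Capture the result set into a #temp table instead of sending it to
-- the client; if the duration now tracks the CPU time, the gap was
-- network / client fetch time.
CREATE TABLE #Discard (CustomerID int, Amount money);

INSERT INTO #Discard (CustomerID, Amount)
EXEC dbo.MySlowProc @SomeParam = 42;

DROP TABLE #Discard;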
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.
4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
>|||Cutting the reads down from 600K to 90K probably indicates you are doing
more seeks where as before you were doing scans. Seeks can often be more
CPU intensive since it really has a lot of work compared to a scan. It may
even be slower than a scan under the right or wrong conditions. Bottom line
is that 90K reads is still a lot of reads. But without actually knowing
more about the sp and the tables involved it is hard to say. Duration can
often be misleading do to things like blocking and disk bottlenecks.
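One way to check the seek/scan mix (a rough sketch; the proc name and
parameter are placeholders) is to capture the actual plan for both
versions and compare the operators:

-- STATISTICS PROFILE returns one row per plan operator, so you can
-- count Index Seek vs. Index/Table Scan operators and compare the
-- Rows and Executes columns between the old and new code.
SET STATISTICS PROFILE ON;

EXEC dbo.MySlowProc @SomeParam = 42;

SET STATISTICS PROFILE OFF;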
Andrew J. Kelly SQL MVP
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.
4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
>|||On Tue, 20 Dec 2005 22:33:47 -0500, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
>Cutting the reads down from 600K to 90K probably indicates you are doing
>more seeks where as before you were doing scans. Seeks can often be more
>CPU intensive since it really has a lot of work compared to a scan. It may
>even be slower than a scan under the right or wrong conditions. Bottom lin
e
>is that 90K reads is still a lot of reads. But without actually knowing
>more about the sp and the tables involved it is hard to say. Duration can
>often be misleading do to things like blocking and disk bottlenecks.
It returns about 1400 modest rows and we have lots of bandwidth, so that
can't account for anything like two seconds; anyway, it returns the same
rows for the old code and the new.
The data should be cached; I doubt there's any physical I/O, and none
shows in the STATISTICS IO output.
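If I can catch it mid-run next time, I suppose I could at least see what
the spid is waiting on; a sketch (the spid value is made up):

-- Run from another connection while the SP executes.
SELECT spid, status, lastwaittype, waittime, blocked, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 99;  -- the spid running the SP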
The old code writes about 4k rows to the #temp table, the new code
adds another 100 or so rows to another #temp table.
I suppose that seek versus scan could be the answer, I mean, something
has to be. Let me look again. Nooooo, I set profile on and the old
code has plenty of seeks, too, ...
... hoo boy. OK, the data would have been defragged overnight, and now
I'm seeing different behavior: the OLD code is now taking more CPU
and more duration and about half the reads. I plain don't get it, but
without a repeatable situation to talk about, I can't see what else to
say or ask here.
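Next time I could at least record the fragmentation before and after the
nightly defrag so the two runs are comparable; a sketch, with a
placeholder table name:

-- Fragmentation snapshot for the main table(s) the SP hits.
DBCC SHOWCONTIG ('dbo.SomeBigTable') WITH ALL_INDEXES;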
Thanks for the suggestions, next time I have a similar anomaly (and
time to investigate) I think I'll look right at the scan/seek ratio or
other profile innards.
Josh