CPU cost to physical reads?
Me again, with yet another question/rant about performance tuning.
Is there a CPU cost, as reported to profiler, when a read is physical
and not logical?
Boot up the machine (workstation, XP, SQL2K SP0 -- yes, I should upgrade it,
I guess), start QA, start Profiler, run a big, fat query: it takes 75 seconds
(including several recompiles), with 200k reads and 42 seconds of
CPU. The difference in time is presumably (and logically, from a cold
start) physical disk I/O.
Rerun query, runs in 26 seconds, no recompiles (cached?), but the CPU
is only 6 seconds.
Even if the compile times all counted as CPU (do they?), that would
only be about 24 seconds, not enough to explain the earlier 42.
It sure seems like there is a correspondence between physical reads
and CPU time.
J.
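For anyone trying to reproduce that cold-versus-warm comparison deliberately,
rather than by rebooting, a minimal sketch follows. It assumes a test box
(it flushes the server's caches) and a made-up procedure name standing in for
the big, fat query.

USE MyTestDatabase          -- placeholder database name
GO
CHECKPOINT                  -- write dirty pages so DROPCLEANBUFFERS can empty the data cache
DBCC DROPCLEANBUFFERS       -- cold data cache: the next run pays the physical reads
DBCC FREEPROCCACHE          -- cold plan cache: the next run pays the compile cost
GO
SET STATISTICS TIME ON      -- reports parse/compile CPU separately from execution CPU
SET STATISTICS IO ON        -- splits logical reads from physical and read-ahead reads
GO
EXEC dbo.BigFatQuery        -- hypothetical name standing in for the real query
GO
EXEC dbo.BigFatQuery        -- warm run: plan cached, pages already in the buffer pool
GO
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
GO

The TIME/IO output gives the compile-CPU versus execution-CPU and
logical-versus-physical split that the Profiler columns lump together.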
|||see inline
jxstern wrote:
> Me again, with yet another question/rant about performance tuning.
> Is there a CPU cost, as reported to profiler, when a read is physical
> and not logical?
The logical read is always there. The result of the physical read is
consumed through the data cache buffer.
My guess is that the CPU cost of a physical read depends on your I/O
subsystem.
> Boot up machine (workstation, XP, SQL2K sp0 yes I should upgrade it I
> guess), start QA, start profiler, run big, fat query, takes 75 seconds
> (including several recompiles), including 200k reads and 42 seconds of
> CPU. The difference in time is presumably (and logically, from a cold
> start) physical disk I/O.
> Rerun query, runs in 26 seconds, no recompiles (cached?), but the CPU
> is only 6 seconds.
> Even if the compile-times all counted as CPU (does it?), that would
> only be about 24 seconds, not enough to explain the earlier 42.
Let me get this straight. First run has an elapsed time of 75 seconds,
the second 26 seconds. Could be. Much of the data will be cached. If you
have a large resultset, then the network traffic would explain the
larger part of the remaining 26 seconds.
So the CPU cost seems to be 42 seconds for the first run and 6 for the
second. That probably means that the optimizer had a hard time
determining the best query plan. On the second run it undoubtedly used the
cached query plan, leaving the other 5.9999 seconds to process the 200k
reads' worth of data from the buffer cache.
> It sure seems like there is a correspondence between physical reads
> and CPU time.
There might be, but I don't think this is the proper method to measure
this.
Gert-Jan
> J.
|||On Wed, 17 Aug 2005 01:03:45 +0200, Gert-Jan Strik
<sorry@.toomuchspamalready.nl> wrote:
>> Me again, with yet another question/rant about performance tuning.
>> Is there a CPU cost, as reported to profiler, when a read is physical
>> and not logical?
>The logical read is always there. The result of the physical read is
>consumed through the data cache buffer.
>My guess is that the CPU cost of a physical read depends on your I/O
>subsystem.
OK, but I'd sort of assumed (!) it was zero cost, as far as the
profiler would know or care. My bad.
>> Boot up machine (workstation, XP, SQL2K sp0 yes I should upgrade it I
>> guess), start QA, start profiler, run big, fat query, takes 75 seconds
>> (including several recompiles), including 200k reads and 42 seconds of
>> CPU. The difference in time is presumably (and logically, from a cold
>> start) physical disk I/O.
>> Rerun query, runs in 26 seconds, no recompiles (cached?), but the CPU
>> is only 6 seconds.
>> Even if the compile-times all counted as CPU (does it?), that would
>> only be about 24 seconds, not enough to explain the earlier 42.
>Let me get this straight. First run has an elapsed time of 75 seconds,
>the second 26 seconds. Could be. Much of the data will be cached. If you
>have a large resultset, then the network traffic would explain the
>larger part of the remaining 26 seconds.
Small resultset, 30 small records.
>So the CPU cost seems to be 42 seconds for the first and 6 for the
>seconds run. That probably means that the optimizer had a hard time
>determining the best query plan. The second run it undoubtedly used the
>cached query plan, leaving the other 5.9999 seconds to process the 200k
>data from the buffer cache.
I can't believe the optimizer had *that* hard a time. Once the data
is cached, I've *never* seen a recompile (based on changed code) take
more than a few seconds. So the rest of that 42-second reading seems to be
CPU corresponding to physical I/O.
Additional observation this morning: on a busy dev server, with someone
doing a big, busy convert on database A, when I go to database B and
run what should be a quick query, the CPU and duration are both very
large. It looks like my query is getting charged for all sorts of
database-loading overhead that, again, I would have thought would be
invisible at the Profiler level.
>> It sure seems like there is a correspondence between physical reads
>> and CPU time.
>There might be, but I don't think this is the proper method to measure
>this.
The only point I want to confirm here is that before one trusts
Profiler results, one should try to get a steady state, or otherwise
normalize the load, because otherwise all sorts of noise will
distort the numbers. It's probably hopeless to measure or quantify it
further. It would be nice if Microsoft would document that these are
valid concerns, and perhaps enhance the system so that these factors are
made visible or eliminated, though I'm guessing such
improvements might be very difficult!
J.
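One way to get closer to a steady-state number than any single Profiler row
is to write the trace to a file and aggregate it afterwards. A rough sketch,
assuming a server-side trace was saved to C:\trace\tuning.trc (the path, and
grouping on the statement text, are assumptions to adjust for the real trace):

SELECT
    CONVERT(nvarchar(200), TextData) AS Stmt,
    COUNT(*)                         AS Executions,
    AVG(CPU)                         AS AvgCPU,
    AVG(Reads)                       AS AvgReads,
    AVG(Duration)                    AS AvgDuration
FROM ::fn_trace_gettable('C:\trace\tuning.trc', DEFAULT)
WHERE EventClass = 12                -- 12 = SQL:BatchCompleted
GROUP BY CONVERT(nvarchar(200), TextData)
ORDER BY AvgDuration DESC

Averaging a dozen executions this way smooths out the noise from whatever
else the server happened to be doing during any one of them.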
|||jxstern wrote:
> The only point I want to confirm here is that before one trusts in
> profiler results, one should try to get a steady-state, or otherwise
> normalize the load, because otherwise all sorts of noise is going to
> distort the numbers. It's probably hopeless to measure or quantify it
> further. It would be nice if Microsoft would document that these are
> valid concerns, perhaps enhance the system so that these factors are
> made visible or eliminated or something, though I guessing such
> improvements might be very difficult!
>
That's not what I've observed. CPU should be pretty consistent given the
same execution plan and parameter set. It's possible physical reads
could add some overhead to the query. It's also possible the load on the
server is forcing SQL Server to use another execution plan. It would
help if you could check that the execution plan is the same with and
without load, using the same parameters, and then post the Profiler
results here.
David Gugick
Quest Software
www.imceda.com
www.quest.com
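A low-tech way to act on that suggestion is to capture the plan as text under
both conditions and diff the two captures. A sketch, with a placeholder
procedure name and parameter:

SET SHOWPLAN_TEXT ON        -- must be alone in its batch; batches after it are compiled, not run
GO
EXEC dbo.QuickQuery @SomeParam = 1   -- hypothetical call; substitute the real one
GO
SET SHOWPLAN_TEXT OFF
GO

Run it once on the quiet server and once while the big convert is going, save
both outputs, and compare; if the plan text differs, the load really did push
the optimizer onto another plan.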
cpu + f(reads) = duration??
OK, I have an SP that users say runs slow. I take a look, it's taking
2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
I munge around the logic and cut the reads to 90k. The CPU rises
slightly to around 3.0. The duration is never below 5.0!
OK, so WTF is SQLServer doing when it's not doing reads and not
cranking CPU?
--
This is on a dev server, there is some mild contention, but the
comparison of the original code to the modified code is pretty
constant. I tried throwing in a few NOLOCKs. I did add a #temp
table, which worked better than a @tablevar because it does hold a
mighty 100 to 1000 rows and it autostats itself, I guess. Adding
explicit indexes to it seems to make things (much) worse.
Thanks.
Josh
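For what it's worth, the #temp-table shape described above is roughly the
following (all object names here are made up); the point is that a populated
#temp table has real row counts and can get auto-created statistics, which a
@tablevar never does:

CREATE TABLE #work
(
    KeyId  int   NOT NULL,
    Amount money NULL
)

INSERT INTO #work (KeyId, Amount)
SELECT t.KeyId, t.Amount
FROM dbo.SomeBigTable AS t      -- placeholder source table
WHERE t.SomeFlag = 1

-- The optimizer gets row counts (and, on the join column, auto-created
-- statistics) for #work, which it would not get for a table variable.
-- An explicit index on #work is optional and, as noted above, can make
-- things worse for a few hundred rows.
SELECT o.*
FROM dbo.OtherTable AS o        -- placeholder
JOIN #work AS w ON w.KeyId = o.KeyId

DROP TABLE #work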
|||Hi
Roughly, the difference between the CPU time and duration is your client
retrieving the data across the network.
How many rows are you pushing back, and can the client handle the volume?
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
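A quick way to test that theory is to run the same statement but keep the
resultset on the server, so nothing crosses the wire. A sketch with
placeholder names:

-- Same query, but the rows land in a scratch table instead of the client.
SELECT t.Col1, t.Col2
INTO #discard
FROM dbo.SomeView AS t          -- stand-in for the slow query
WHERE t.SomeDate >= '20051201'

DROP TABLE #discard

If the duration of this version collapses to roughly its CPU time, the
missing seconds were spent pushing rows to (or waiting on) the client; if
not, the time is going somewhere inside the server.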
|||Cutting the reads down from 600K to 90K probably indicates you are doing
more seeks, whereas before you were doing scans. Seeks can often be more
CPU intensive since they really do a lot of work compared to a scan. They may
even be slower than a scan under the right or wrong conditions. Bottom line
is that 90K reads is still a lot of reads. But without actually knowing
more about the SP and the tables involved it is hard to say. Duration can
often be misleading due to things like blocking and disk bottlenecks.
--
Andrew J. Kelly SQL MVP
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
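To see whether it really is seek work, the executed plan can be dumped
alongside the results: STATISTICS PROFILE shows, operator by operator,
whether each step was a seek or a scan and how many times it ran. A sketch
with a placeholder procedure name:

SET STATISTICS PROFILE ON       -- adds a plan rowset after each resultset
GO
EXEC dbo.SlowProc               -- hypothetical name for the SP being tuned
GO
SET STATISTICS PROFILE OFF
GO

Comparing the LogicalOp, Rows, and Executes columns between the old and new
versions shows where the extra CPU per read is going.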
|||On Tue, 20 Dec 2005 22:33:47 -0500, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
>Cutting the reads down from 600K to 90K probably indicates you are doing
>more seeks where as before you were doing scans. Seeks can often be more
>CPU intensive since it really has a lot of work compared to a scan. It may
>even be slower than a scan under the right or wrong conditions. Bottom line
>is that 90K reads is still a lot of reads. But without actually knowing
>more about the sp and the tables involved it is hard to say. Duration can
>often be misleading do to things like blocking and disk bottlenecks.
It returns about 1400 modest rows and we have lots of bandwidth; that
can't take anything like two seconds, and anyway it returns the same rows
for the old code and the new.
The data should be cached, I doubt there's any physical IO, and none
shows in the stats io display.
The old code writes about 4k rows to the #temp table, the new code
adds another 100 or so rows to another #temp table.
I suppose that seek versus scan could be the answer, I mean, something
has to be. Let me look again. Nooooo, I set profile on and the old
code has plenty of seeks, too, ...
... hoo boy. OK, the data will have been defragged overnight, and now
I'm seeing different behavior with the OLD code now taking more CPU
and more duration and about half the reads. I plain don't get it, but
without a repeatable situation to talk about, I can't see what else to
say or ask here.
Thanks for the suggestions; next time I have a similar anomaly (and
time to investigate) I think I'll look right at the scan/seek ratio or
other profile innards.
Josh
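Since the overnight defrag changed the behavior, fragmentation is worth
ruling in or out before the next comparison. On SQL 2000 that is DBCC
SHOWCONTIG; the table name below is a placeholder:

-- Scan density and logical fragmentation for the table the query hits;
-- an index rebuild also refreshes statistics, which by itself can change the plan.
DBCC SHOWCONTIG ('SomeBigTable') WITH TABLERESULTS, ALL_INDEXES

Capturing this before and after the maintenance window makes it easier to
tell whether a plan change came from the defrag, the stats refresh, or
something else.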