Showing posts with label users. Show all posts

Thursday, March 29, 2012

create an incremental counter in the stored procedure

Hello, I have the following stored procedure.
I want to add an extra field, "ranking", that just increments the row number.
Another feature would be: if several users have an equal totalvalue, they
should have an equal ranking number, and the rankings of the following users
would have to be adjusted as well. Thanks.

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS OFF
GO

CREATE PROCEDURE [dbo].[Rankings]
@iErrorCode int OUTPUT
AS

SELECT TOP 30
###COUNTER##,
[user],
[totalvalue], [cash], [stocksvalue]

FROM [dbo].[users]
ORDER BY totalvalue DESC

SELECT @iErrorCode = @@ERROR

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

There is more than one way to rank a set with tied values.

CREATE TABLE Users (userid INTEGER PRIMARY KEY, totalvalue NUMERIC(10,2) NOT NULL, cash NUMERIC(10,2) NOT NULL, stocksvalue NUMERIC(10,2) NOT NULL)

INSERT INTO Users VALUES (101,1010,100,99)
INSERT INTO Users VALUES (102,2020,100,99)
INSERT INTO Users VALUES (103,3030,100,99)
INSERT INTO Users VALUES (104,3030,100,99)
INSERT INTO Users VALUES (105,1002,100,99)
INSERT INTO Users VALUES (106,1002,100,99)
INSERT INTO Users VALUES (107,1002,100,99)
INSERT INTO Users VALUES (108,1002,100,99)
INSERT INTO Users VALUES (109,1000,100,99)

See if this gives the result you expect:

SELECT COUNT(U2.totalvalue)+1 AS ranking,
U1.userid, U1.totalvalue, U1.cash, U1.stocksvalue
FROM Users AS U1
LEFT JOIN Users AS U2
ON U1.totalvalue < U2.totalvalue
GROUP BY U1.userid, U1.totalvalue, U1.cash, U1.stocksvalue
ORDER BY ranking, U1.userid

Result:

ranking userid totalvalue cash stocksvalue
---- ---- ---- ---- ----
1 103 3030.00 100.00 99.00
1 104 3030.00 100.00 99.00
3 102 2020.00 100.00 99.00
4 101 1010.00 100.00 99.00
5 105 1002.00 100.00 99.00
5 106 1002.00 100.00 99.00
5 107 1002.00 100.00 99.00
5 108 1002.00 100.00 99.00
9 109 1000.00 100.00 99.00

(9 row(s) affected)

Or maybe this:

SELECT COUNT(DISTINCT U2.totalvalue) AS ranking,
U1.userid, U1.totalvalue, U1.cash, U1.stocksvalue
FROM Users AS U1
LEFT JOIN Users AS U2
ON U1.totalvalue <= U2.totalvalue
GROUP BY U1.userid, U1.totalvalue, U1.cash, U1.stocksvalue
ORDER BY ranking, U1.userid

Result:

ranking userid totalvalue cash stocksvalue
---- ---- ---- ---- ----
1 103 3030.00 100.00 99.00
1 104 3030.00 100.00 99.00
2 102 2020.00 100.00 99.00
3 101 1010.00 100.00 99.00
4 105 1002.00 100.00 99.00
4 106 1002.00 100.00 99.00
4 107 1002.00 100.00 99.00
4 108 1002.00 100.00 99.00
5 109 1000.00 100.00 99.00

(9 row(s) affected)
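On SQL Server 2005 and later, both numbering schemes above are available directly as ranking window functions: RANK() leaves gaps after ties (matching the first query), DENSE_RANK() does not (matching the second), and ROW_NUMBER() gives the plain incrementing counter originally asked for. A sketch against the same Users table:

SELECT RANK() OVER (ORDER BY totalvalue DESC) AS ranking,
DENSE_RANK() OVER (ORDER BY totalvalue DESC) AS dense_ranking,
ROW_NUMBER() OVER (ORDER BY totalvalue DESC) AS counter,
userid, totalvalue, cash, stocksvalue
FROM Users
ORDER BY ranking, userid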

--
David Portas
SQL Server MVP
--|||Thank you! It works fine.


Tuesday, March 27, 2012

create a unique Random integer

Hi there,
I am trying to create a UID that is unique within my SQL Server. There are many users accessing the server in separate databases, but then I want to combine all the data from these tables, keeping the ID from each one as a primary key.

I have written the following function, but when I call it as a default value for a field, it does not produce a unique number.

CREATE FUNCTION GETNEXTID(@CURDATE DATETIME)
RETURNS BIGINT
AS
BEGIN
RETURN (SELECT CAST(
CAST(DATEPART(YY,@CURDATE) AS VARCHAR) +
RIGHT('0' + CAST(DATEPART(M,@CURDATE) AS VARCHAR),2) +
RIGHT('0' + CAST(DATEPART(D,@CURDATE) AS VARCHAR),2) +
RIGHT('0' + CAST(DATEPART(HH,@CURDATE) AS VARCHAR),2) +
RIGHT('0' + CAST(DATEPART(SS,@CURDATE) AS VARCHAR),2) +
RIGHT('00' + CAST(DATEPART(MS,@CURDATE) AS VARCHAR),3) AS BIGINT))
END

Can anyone help?

I would create a table as follows:

Create Table DatabaseId
(
DBId Int Identity,
DBName varchar(40) NOT NULL
)

When you import data from the different databases, prefix your unique identifier with DBId. That should give you a unique number across the DBs.|||Take a look at NEWID()
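A hedged sketch of the DBId approach (the consolidated table and its columns are illustrative names, not from the original post): each imported row keeps its local ID, and uniqueness comes from the (DBId, local id) pair.

-- Consolidated table: per-database IDs can repeat, but the
-- (DBId, UserId) pair is unique across all source databases.
CREATE TABLE CombinedUsers
(
DBId INT NOT NULL REFERENCES DatabaseId (DBId),
UserId BIGINT NOT NULL, -- the ID from the source database
UserName VARCHAR(100) NOT NULL,
PRIMARY KEY (DBId, UserId)
)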

Tuesday, March 20, 2012

Create 3 users tables

Hello,
On my web site I have 2 types of users: students and professors.
All users need to log in, but the personal information for each user type
has different fields.
So I created 3 tables: professors, students and authentication.
"professors" will have all the personal data for professors.
"students" will have all the personal data for students.
"authentication" will have the username and password for all users.
How can I link the authentication table to both the professors and students
tables so that when a user logs in I get the data for that user, either a
professor or a student?
And is this the right way to do it?
I am using SQL 2005 and ASP.Net 2.0.
Thanks,
Miguel

On Wed, 21 Dec 2005 23:10:00 +0000, Miguel Dias Moura wrote:

Hi Miguel,
I would use one Persons table to hold information that applies to both
professors and students. Specific information then goes in the Students
and the Professors tables.
Login information could go in the Persons table, but is probably better
placed in a separate table (for reasons of security, but also for
flexibility: maybe someone will someday need two separate accounts, and
then you'll be ready for it).
The tables Persons, Students and Professors all have the same primary
key; this same column is also a foreign key in the Students and Professors
tables, in both cases referencing the Persons table.
The Logins table has userid as primary key (of course) and also has a
foreign key that references the Persons table.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
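Hugo's supertype/subtype design can be sketched as follows; the non-key columns are illustrative placeholders, not from the original post:

CREATE TABLE Persons
(
personid INT PRIMARY KEY,
name VARCHAR(100) NOT NULL -- data common to students and professors
)

CREATE TABLE Students
(
personid INT PRIMARY KEY REFERENCES Persons (personid),
enrollmentyear INT NOT NULL -- student-specific fields
)

CREATE TABLE Professors
(
personid INT PRIMARY KEY REFERENCES Persons (personid),
department VARCHAR(100) NOT NULL -- professor-specific fields
)

CREATE TABLE Logins
(
userid VARCHAR(50) PRIMARY KEY,
personid INT NOT NULL REFERENCES Persons (personid),
passwordhash VARBINARY(64) NOT NULL
)

On login, join Logins to Persons, then outer-join Students and Professors to see which subtype row exists for that person.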

Create 2 publications for same table

Hi

I created a DB named 'TestDB' and created a table called Users. This table has 3 columns (Uname, pwd and version).

I need to create two publication for this user table.

1) Create a publication using all columns.

2) Create a publication using Uname and pwd (not the version column).

I am using Merge publication.

When I create the first publication (either one: all 3 columns or any 2 columns), it is created successfully.

When I create the second publication it throws an error. The details are below.

TITLE: New Publication Wizard

SQL Server Management Studio could not create article 'Users'.


ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

An article with a different subscriber_upload_options value already exists for object '[dbo].[Users]'.
Changed database context to 'TestDB'. (Microsoft SQL Server, Error: 20053)

How can I create the second publication? Is it possible? If yes, please give me the steps.

Thanks

Hi

Got the solution. Solved the problem.

The problem was that when I created the second publication I changed the article synchronization direction; that's why it threw the error. After giving both articles the same synchronization direction it is working.
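When the merge articles are scripted rather than created through the wizard, the synchronization direction is the @subscriber_upload_options argument of sp_addmergearticle, and the error above means that value must match across every publication of the same table. A hedged sketch (publication names are illustrative):

EXEC sp_addmergearticle
@publication = N'Pub2', -- hypothetical second publication
@article = N'Users',
@source_object = N'Users',
@subscriber_upload_options = 0 -- must match the value used by the first publication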

Sunday, March 11, 2012

CR10 ActiveX print orientation

Does anyone know how to set the page orientation of the print dialog on the user's machine at runtime? Is it possible?
i.e. if you create a CR with a landscape layout and the user's default print setup is portrait, the report will not print correctly.

You need to define something called:

crep.PaperOrientation = 2 ; //crLandscape

Where

crDefaultPaperOrientation= 0
crLandscape= 2
crPortrait = 1

Go to the Crystal Reports developer help and search for PaperOrientation or CRPaperOrientation:

Printer Settings
This application (some demo application) demonstrates how to change the report printer settings at runtime using code.

There are four new report properties that allow you to easily retrieve and set PaperSize, PaperOrientation, PaperSource, and PrinterDuplex for the report printer options. All of these are demonstrated.
There is also a new method called PrinterSetup that provides a Windows standard printer setup window to allow the user to change the printer properties directly at runtime. This is demonstrated as well.
The two methods are independent of each other. For example, changing the Windows standard printer setup will not alter the report printer settings and vice-versa. These new properties and the new method give you much more control over how the report is printed.


Thanks

:wave:

Thursday, March 8, 2012

CPU Utilization

Hi All,

I want to keep track of the CPU utilization and the number of users connected for each database on our production box. I chose to get the data from the sysprocesses table in the master database.

But I realised that for some reason the master..sysprocesses.cpu column stays static or just keeps adding to the existing values.

Is there any way through which I can clear this data (the cpu column in sysprocesses) after I have captured it in a table?

Any help is appreciated.

Thanks.

You may be better off using perfmon with those two counters. You can put the output to a .csv for later import into databases. This will not give you CPU usage per connection, however, so it is somewhat useless if you want to do chargebacks of some sort.
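The cpu column in sysprocesses is cumulative for the life of each connection, so it cannot be reset; the usual workaround is to capture snapshots and subtract consecutive values. A rough sketch (the snapshot table is an illustrative name, not an existing object):

-- Capture a snapshot of the cumulative counters
INSERT INTO CpuSnapshot (spid, login_time, cpu, captured_at)
SELECT spid, login_time, cpu, GETDATE()
FROM master..sysprocesses

-- Usage over an interval = difference between two snapshots,
-- matched on (spid, login_time) so a reused spid is not confused
SELECT curr.spid, curr.cpu - prev.cpu AS cpu_delta
FROM CpuSnapshot AS curr
JOIN CpuSnapshot AS prev
ON curr.spid = prev.spid
AND curr.login_time = prev.login_time
AND curr.captured_at > prev.captured_at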

CPU Usage(%), Logical IO Performed (%) Usage for Adhoc Queries is 90%

Hello. When I look at the SQL Server 2005 Management Studio Server Dashboard, my (users') databases and the msdb database show up as a very small % of the CPU Usage (%) and Logical IO Performed (%) pie charts.

90% of the total CPU usage is shown for Adhoc Queries. What exactly does this mean in the Dashboard? If the applications used more, would it show at the database level or not?

Sincerely, this dashboard is good; if anyone is watching it daily, please share your experiences here.

Thanks in advance. Hail SQL Server!

This means that of all the CPU and I/O performed, 90% is coming from Adhoc Queries; it does not mean 90% CPU usage.

WesleyB

Visit my SQL Server weblog @. http://dis4ea.blogspot.com

CPU usage (HIGH, MEDIUM, LOW)

I have a Windows 2000 Advanced Server with SQL Server
2000 Enterprise Edition with SP3a.
The server has approximately 300 users, and CPU usage
stays between 90 to 100% for at least 200 minutes out
of a 12 hour processing window.
What is a good metric for LOW, MEDIUM, and HIGH CPU
usage rates?
Thanks,
Mark
Hi,
90-100% for more than 3 hours will be a problem. Identify the process (it
might be a batch) causing the bottleneck.
If you find that there is not much optimization that can be done in your
program, probably go for 1 more additional CPU.
Low: less than 20%
Medium: between 21% and 60%
High: anything above 60%
Thanks
Hari
MCDBA
"Mark" <anonymous@.discussions.microsoft.com> wrote in message
news:222e501c45d80$90292870$a601280a@.phx.gbl...
> I have a Windows 2000 Advanced Server with SQL Server
> 2000 Enterprise Edition with SP3A.
> The server has approximately 300 users and CPU usage
> stays
> states between 90 to 100 % for at least 200 minutes out
> of a 12 hour processing windows in time.
> What is a good metric for LOW, MEDIUIM, and HIGH CPU
> usage rates.
> Thanks,
> Mark
>
>


Saturday, February 25, 2012

cpu + f(reads) = duration??

OK, I have an SP that users say runs slow. I take a look: it's taking
2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
I munge around the logic and cut the reads to 90k. The CPU rises
slightly to around 3.0. The duration is never below 5.0!
OK, so WTF is SQL Server doing when it's not doing reads and not
cranking CPU?
--
This is on a dev server; there is some mild contention, but the
comparison of the original code to the modified code is pretty
constant. I tried throwing in a few NOLOCKs. I did add a #temp
table, which worked better than a @tablevar because it does hold a
mighty 100 to 1000 rows and it autostats itself, I guess. Adding
explicit indexes to it seems to make things (much) worse.
Thanks.
Josh

Hi
Roughly, the difference between the CPU time and duration is your client
retrieving the data across the network.
How many rows are you pushing back, and can the client handle the volume?
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
>
|||Cutting the reads down from 600K to 90K probably indicates you are doing
more seeks, whereas before you were doing scans. Seeks can often be more
CPU intensive since a seek really does a lot of work compared to a scan. It may
even be slower than a scan under the right (or wrong) conditions. Bottom line
is that 90K reads is still a lot of reads. But without actually knowing
more about the sp and the tables involved it is hard to say. Duration can
often be misleading due to things like blocking and disk bottlenecks.
--
Andrew J. Kelly SQL MVP
"jxstern" <jxstern@.nowhere.xyz> wrote in message
news:4j5hq1dsqfp23vrkmlur6pvl15hvv8oc6m@.4ax.com...
> OK, I have an SP that users say runs slow. I take a look, it's taking
> 2.5 seconds of CPU and doing 600k reads to give a duration around 3.0.
> I munge around the logic and cut the reads to 90k. The CPU rises
> slightly to around 3.0. The duration is never below 5.0!'
> OK, so WTF is SQLServer doing when it's not doing reads and not
> cranking CPU?
> --
> This is on a dev server, there is some mild contention, but the
> comparison of the original code to the modified code is pretty
> constant. I tried throwing in a few NOLOCKS. I did add a #temp
> table, which worked better than a @.tablevar because it does hold a
> mighty 100 to 1000 rows and it autostats itself, I guess. Adding
> explicit indexes to it seems to make things (much) worse.
> Thanks.
> Josh
>|||On Tue, 20 Dec 2005 22:33:47 -0500, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
>Cutting the reads down from 600K to 90K probably indicates you are doing
>more seeks where as before you were doing scans. Seeks can often be more
>CPU intensive since it really has a lot of work compared to a scan. It may
>even be slower than a scan under the right or wrong conditions. Bottom line
>is that 90K reads is still a lot of reads. But without actually knowing
>more about the sp and the tables involved it is hard to say. Duration can
>often be misleading do to things like blocking and disk bottlenecks.
It returns about 1400 modest rows and we have lots of bandwidth, can't
take anything like two seconds, anyway it returns the same rows for
old code and new.
The data should be cached, I doubt there's any physical IO, and none
shows in the stats io display.
The old code writes about 4k rows to the #temp table, the new code
adds another 100 or so rows to another #temp table.
I suppose that seek versus scan could be the answer, I mean, something
has to be. Let me look again. Nooooo, I set profile on and the old
code has plenty of seeks, too, ...
... hoo boy. OK, the data will have been defragged overnight, and now
I'm seeing different behavior with the OLD code now taking more CPU
and more duration and about half the reads. I plain don't get it, but
without a repeatable situation to talk about, I can't see what else to
say or ask here.
Thanks for the suggestions, next time I have a similar anomaly (and
time to investigate) I think I'll look right at the scan/seek ratio or
other profile innards.
Josh
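For repeatable comparisons of this kind, SET STATISTICS TIME and SET STATISTICS IO report per-statement CPU time, elapsed time, and logical/physical reads directly in the query window, which helps separate read work from waiting. A sketch (the procedure name is a hypothetical placeholder):

SET STATISTICS IO ON
SET STATISTICS TIME ON
EXEC dbo.SlowProc -- hypothetical name for the SP being tuned
SET STATISTICS TIME OFF
SET STATISTICS IO OFF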


Sunday, February 19, 2012

Counting total number of items in each category

Hello everyone,

I have 2 tables: one with a list of countries and another with a list of users. The users table stores the country they are from as well as other info.

Eg the structure is a little bit like this...

Countries:
--
CountryId
CountryName

Users:
--
UserId
UserName
CountryId

So, the question is: how do I list all my countries with the total users in each? Eg, so the results are something like this......

CountryName TotalUsers
United Kingdom 334
United States 1212
France 433
Spain 0

Any help would be great as I have been fumbling with this all morning. I'm not 100% with SQL yet!!

Cheers

Stephen

Here you go...

Code Snippet

Create Table #countries (
    [CountryId] int,
    [CountryName] Varchar(100)
);

Insert Into #countries Values(1, 'USA');
Insert Into #countries Values(2, 'UK');
Insert Into #countries Values(3, 'IN');

Create Table #users (
    [UserId] int,
    [UserName] Varchar(100),
    [CountryId] int
);

Insert Into #users Values(1, 'John', 1);
Insert Into #users Values(2, 'Dale', 1);
Insert Into #users Values(3, 'Thome', 2);

-- With ZERO COUNT
Select
    [CountryName],
    Count([UserId]) As TotalUsers
from #countries C
Left Outer Join #users U on C.[CountryId] = U.[CountryId]
Group By [CountryName]

-- Without ZERO COUNT
Select
    [CountryName],
    Count([UserId]) As TotalUsers
from #countries C
Inner Join #users U on C.[CountryId] = U.[CountryId]
Group By [CountryName]
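The zero-count behaviour hinges on counting a column from the right-hand (users) side of the LEFT OUTER JOIN: COUNT(column) skips NULLs, so countries with no users report 0. A minimal sketch of the same pattern using Python's sqlite3 (tables and data mirror the snippet above, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE countries (CountryId INTEGER, CountryName TEXT)")
cur.execute("CREATE TABLE users (UserId INTEGER, UserName TEXT, CountryId INTEGER)")
cur.executemany("INSERT INTO countries VALUES (?, ?)",
                [(1, "USA"), (2, "UK"), (3, "IN")])
cur.executemany("INSERT INTO users VALUES (?, ?, ?)",
                [(1, "John", 1), (2, "Dale", 1), (3, "Thome", 2)])

# COUNT(U.UserId) ignores the NULLs produced by the outer join,
# so a country with no users reports 0 rather than 1.
rows = cur.execute("""
    SELECT C.CountryName, COUNT(U.UserId) AS TotalUsers
    FROM countries C
    LEFT OUTER JOIN users U ON C.CountryId = U.CountryId
    GROUP BY C.CountryName
    ORDER BY C.CountryName
""").fetchall()
print(rows)  # [('IN', 0), ('UK', 1), ('USA', 2)]
```

Had the query counted COUNT(*) instead, the Spain-style rows would come back as 1, because the join still produces one (all-NULL) row per unmatched country.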

|||

Super! Many thanks and thank you for replying so quickly.

Makes sense now - easier than what I was trying to do!!!

Tuesday, February 14, 2012

counting field totals as a formula

Hi,

I'm trying to count field login totals for users which updates another field
in another table. Which is the most efficient method?

I don't want to use a standard query as it will take too long if there are
1000 users per company each with 1000 plus logins.

I was thinking in terms of either a function, or a formula (using the built-
in formula field within mssql).

the query though (as it's the only way with which I'm familiar) is:

SELECT SUM(NumberOfLogons) AS TotalLogons
FROM EmployerProfileDB39
WHERE (CompanyName = x) AND (EmployerID = y)

how would I write this as a formula or as a function?

James M via SQLMonster.com (forum@.SQLMonster.com) writes:
> I'm trying to count field login totals for users which updates another
> field in another table. Which is the most efficient method?
> I don't want to use a standard query as it will take too long if there are
> 1000 users per company each with 1000 plus logins.

Depends on what you mean by a "standard query", but if you mean one
that is ANSI-compliant you are likely to be right.

> I was thinking in terms of either a function, or a formula (using the
> built- in formula field within mssql).

Ehum, there is no "built-in formula field" in SQL Server. There are
computed columns, which may appear as "formula fields" in Enterprise
Manager.

Anyway, I don't think a computed column would be a good idea, since that
would have to be a scalar UDF, and they are not known for being
performance boosters. You could index the column, but there is some
hassle with this.

Anyway, this query should take you a long way:

UPDATE t
SET logintotals = e.TotalLogons
FROM tbl t
JOIN (SELECT CompanyName, EmployerID,
             SUM(NumberOfLogons) AS TotalLogons
      FROM EmployerProfileDB39
      GROUP BY CompanyName, EmployerID) AS e
  ON t.CompanyName = e.CompanyName
 AND t.EmployerID = e.EmployerID
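The idea above is one set-based statement instead of a per-user loop: aggregate once, then join the totals back to the summary table. A sketch of the same pattern with Python's sqlite3 (schema and names invented; SQLite spells the grouped-aggregate update as a correlated subquery rather than SQL Server's UPDATE ... FROM):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Invented minimal schema: per-employer login rows plus a summary table.
cur.execute("CREATE TABLE EmployerProfileDB39 "
            "(CompanyName TEXT, EmployerID INTEGER, NumberOfLogons INTEGER)")
cur.execute("CREATE TABLE summary "
            "(CompanyName TEXT, EmployerID INTEGER, logintotals INTEGER)")
cur.executemany("INSERT INTO EmployerProfileDB39 VALUES (?, ?, ?)",
                [("Acme", 1, 10), ("Acme", 1, 5), ("Acme", 2, 7)])
cur.executemany("INSERT INTO summary VALUES (?, ?, ?)",
                [("Acme", 1, 0), ("Acme", 2, 0)])

# One statement updates every summary row from the grouped aggregate.
cur.execute("""
    UPDATE summary
    SET logintotals = (SELECT SUM(e.NumberOfLogons)
                       FROM EmployerProfileDB39 e
                       WHERE e.CompanyName = summary.CompanyName
                         AND e.EmployerID = summary.EmployerID)
""")
totals = cur.execute("SELECT * FROM summary ORDER BY EmployerID").fetchall()
print(totals)  # [('Acme', 1, 15), ('Acme', 2, 7)]
```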

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

counter inside select statement?

Hi, can you add a counter inside a select statement to get a unique id for each of the rows?

In my forum I have a page that displays a user's past posts in the forum; these are first sorted according to their topicID and then they are sorted by creation date. What I want in the select statement is to create a counter to get a new numeric order.

This is the normal way:

SELECT id, post, comment, created FROM forum_posts WHERE (topicID = @topicID) ... some more where/order by statements

This is what i want:

DECLARE @tempCounter bigint
SET @tempCounter = 0
SELECT @tempCounter, id, post, comment, created FROM forum_posts WHERE (topicID = @topicID)... some more where/order by statements and at the end.. (SELECT @tempCounter = @tempCounter + 1)

Anyone know if this can be done?

Use ROW_NUMBER (if you are using SQL Server 2005) as:

Select ROW_NUMBER() OVER(ORDER BY Some_Field) AS rownum,
* from Table_Name
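ROW_NUMBER() simply hands out 1, 2, 3, ... in the order named by its OVER clause, independent of the rows' physical order. A quick sketch with Python's sqlite3 (table and data invented; window functions need SQLite 3.25+, bundled with Python 3.7 and later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE forum_posts (id INTEGER, post TEXT)")
cur.executemany("INSERT INTO forum_posts VALUES (?, ?)",
                [(30, "c"), (10, "a"), (20, "b")])

# The counter follows the OVER(ORDER BY id) ordering, not insert order.
rows = cur.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY id) AS rownum, id, post
    FROM forum_posts
    ORDER BY rownum
""").fetchall()
print(rows)  # [(1, 10, 'a'), (2, 20, 'b'), (3, 30, 'c')]
```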

|||

Yes, I am using MS SQL 2005. It worked for half of the problem; I'll display the SP below. What the SP does is select the posts from a single user, ordering them first by topicID and then by date created.
In order to bring back just what the client needs, I incorporated a custom paging system.

DECLARE @first_id int, @startRow int
SET @startRowIndex = (@startRowIndex - 1) * @maximumRows + 1
IF @startRowIndex = 0
    SET @startRowIndex = 1

SET ROWCOUNT @startRowIndex

SELECT @first_id = ROW_NUMBER() OVER (ORDER BY forum_answer.topicid ASC, forum_answer.id DESC)
FROM forum_answer
INNER JOIN forum_topics ON forum_topics.id = forum_answer.topicid
INNER JOIN forum_board ON forum_board.id = forum_topics.boardid AND forum_board.hidden = 0
INNER JOIN forum ON forum.id = forum_board.forumid AND forum.hidden = 0
RIGHT OUTER JOIN profile_publicinfo ON profile_publicinfo.username = forum_answer.username
WHERE (forum_answer.username = @UserName)
ORDER BY forum_answer.topicid ASC, forum_answer.id DESC

SET ROWCOUNT @maximumRows

DECLARE @tempVar int
SELECT @tempVar = ROW_NUMBER() OVER (ORDER BY forum_answer.topicid ASC, forum_answer.id DESC),
       forum_answer.topicid, forum_answer.id, forum_answer.username,
       forum_answer.answer, forum_answer.created, profile_publicinfo.signature
FROM forum_answer
INNER JOIN forum_topics ON forum_topics.id = forum_answer.topicid
INNER JOIN forum_board ON forum_board.id = forum_topics.boardid AND forum_board.hidden = 0
INNER JOIN forum ON forum.id = forum_board.forumid AND forum.hidden = 0
RIGHT OUTER JOIN profile_publicinfo ON profile_publicinfo.username = forum_answer.username
WHERE (forum_answer.username = @UserName) AND (@first_id <= @tempVar)
ORDER BY forum_answer.topicid ASC, forum_answer.id DESC

SET ROWCOUNT 0

The first thing we do is get the row number for the first post that should be returned. This works with the ROW_NUMBER method; the second select statement takes xx number of rows starting from the position it receives from the first select statement. So if @first_id is 5 and maximumRows is 5, then the second select statement will only take rows 5->10 from the table.

The problem is the ROW_NUMBER inside the second select statement. SQL screams if I put it at the end, like (@first_id <= ROW_NUMBER...:

Msg 4108, Level 15, State 1, Procedure Forum_GetUserAnswers, Line 32
Windowed functions can only appear in the SELECT or ORDER BY clauses.

SQL also screams if I put it at the beginning (as shown above); then I get the following error:

Msg 141, Level 15, State 1, Procedure Forum_GetUserAnswers, Line 33
A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations.

That's why I wanted a way to calculate the rows and get an ID according to that. Any idea of how to proceed?
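Msg 4108 means exactly what it says: a windowed function may only appear in the SELECT or ORDER BY clause, so it cannot be compared in a WHERE clause directly. The usual workaround is to number the rows inside a derived table (or CTE) and filter on the alias in the outer query, which also replaces the SET ROWCOUNT dance. A sketch of that pattern with Python's sqlite3 (a single invented table stands in for the joins above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE forum_answer (id INTEGER, topicid INTEGER, answer TEXT)")
cur.executemany("INSERT INTO forum_answer VALUES (?, ?, ?)",
                [(i, i % 3, "post %d" % i) for i in range(1, 10)])

start_row, maximum_rows = 4, 3  # e.g. page 2 with 3 rows per page

# Number the rows once in a derived table, then filter on the alias --
# the window function never appears in the outer WHERE clause itself.
rows = cur.execute("""
    SELECT rownum, id, topicid, answer
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY topicid ASC, id DESC) AS rownum,
                 id, topicid, answer
          FROM forum_answer) AS numbered
    WHERE rownum BETWEEN ? AND ? + ? - 1
    ORDER BY rownum
""", (start_row, start_row, maximum_rows)).fetchall()
print([r[0] for r in rows])  # [4, 5, 6]
```

The same shape works in SQL Server 2005: put the whole joined SELECT with ROW_NUMBER() into the derived table, keep the WHERE rownum BETWEEN @first_id AND @first_id + @maximumRows - 1 outside, and both error messages go away.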