Wednesday, March 7, 2012

CPU Usage

I was just wondering why my CPU usage never really goes over 10% during a
pivot-table style operation?
I have about 350 million rows of fact data in my cube (13 weeks of Point of
Sale information). I have three measures and four dimensions, one with
multiple levels. One of my measures is a distinct store count. I created
ALL the aggregations (100%). Also, the cube is partitioned by week and by sale
type. For example, one partition will hold all regular sales for one week,
another partition will hold all promotional sales for the same week. So, I
have 13 (weeks) x 12 (sale types) = 156 partitions.
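The partition grid described above (one partition per week per sale type) can be sketched as a simple cross product. The names below are purely illustrative placeholders, not actual Analysis Services partition syntax:

```python
# Hypothetical sketch of the 13-week x 12-sale-type partition grid.
# "WeekNN" / "SaleTypeNN" are made-up labels for illustration only.
weeks = [f"Week{n:02d}" for n in range(1, 14)]            # 13 weeks
sale_types = [f"SaleType{n:02d}" for n in range(1, 13)]   # 12 sale types

# One partition per (week, sale type) combination.
partitions = [f"{w}_{s}" for w in weeks for s in sale_types]

print(len(partitions))  # 156 partitions in total
```

Each partition then only needs to be reprocessed when its week's data changes, which is the usual motivation for slicing by time.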
When I create a pivot table in Excel linked to this cube, as I rotate and
drill through, the server (dual P3 1 GHz, 1 GB RAM, Ultra3 SCSI) never
really peaks; in fact, I'm rarely seeing CPU usage over 10%.
I also have a question about cube population. During the data pull from the
server, the CPU is pegged at 100%, but during aggregation creation, the CPU
is around 10% again. Is there any way to make the pivot table queries and
the aggregation builds faster?
BTW: I have a multi-threaded partition build tool and that doesn't move the
CPU over 10% either. I've managed to make the processing go faster by
staggering the threads so that 2-3 are loading data and 2-3 are building
aggregations at any one time.
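The staggering described above can be sketched with two counting semaphores, so the data-pull stage and the aggregation stage each admit at most three threads at a time and the two stages overlap. This is a generic Python illustration of the scheduling idea only; the actual build tool and its Analysis Services processing calls are not shown here, so the stage bodies are stand-in strings:

```python
import threading

load_slots = threading.Semaphore(3)  # at most 3 threads pulling data
agg_slots = threading.Semaphore(3)   # at most 3 threads building aggregations

def process_partition(name, results, lock):
    with load_slots:
        loaded = f"{name}:loaded"          # stand-in for the data pull
    with agg_slots:
        done = f"{loaded}:aggregated"      # stand-in for the aggregation build
    with lock:
        results.append(done)

results, lock = [], threading.Lock()
threads = [threading.Thread(target=process_partition, args=(f"P{i}", results, lock))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 8 partitions pass through both stages; while some are still loading,
# others can already be building aggregations.
```

The point of the two separate semaphores is that a partition finishing its load frees a load slot immediately, so loading and aggregating stay pipelined instead of running strictly one phase after the other.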
Thanks-
Malcolm

Hi Malcolm,
I assume you're talking about CPU usage on the server, but what does it look
like on the client? Also, what kind of queries are you running? Are they
getting data at the upper levels of your dimensions or at the leaf levels?
It sounds like disk I/O *might* be the bottleneck here. I assume you've read
the appropriate sections in the Analysis Services Performance Guide on this
subject.
Two other comments:
- You shouldn't have a distinct count measure in a cube with other measures
- it should be in a cube on its own, and you should then use a virtual cube
to create a unified view of all your measures. Again, the AS Performance
Guide contains more details on why this is.
- It's never worth doing 100% aggregations; you're probably adding a lot of
time to your processing for little effect. At the same time, it could be that
you're not even building the aggregations you actually need, especially if
you are querying at the lower levels of dimensions. I wouldn't recommend
building aggregations manually unless you know what you're doing, but you
might find the following post on my blog interesting if you want to see
whether your queries are hitting aggregations or not:
[url]http://spaces.msn.com/members/cwebbbi/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!119.entry[/url]
HTH,
Chris
