How do you create a new cube that consists only of linked objects? As I see it, you can only create a cube by selecting a source, a fact table, etc., and then afterwards add linked objects to the cube....
Hmm, it seems the easiest way to create one is to select a fact table and a dimension in the wizard, finish, and then delete those again. Then you have a "blank" cube and can add the linked objects....
Tuesday, March 20, 2012
Create a cube in analysis services out of more than one fact table.
Is it possible to create a cube in analysis services that has columns from more than one fact table, as long as the fact tables share the same data grain and dimensions?
Thanks for any advice.
Regards,
Anthony
Hi Anthony,
In AS 2005, this is supported via the UDM.
In AS 2000, create two cubes (one from each fact table), then create a
virtual cube to link the two.
If the facts are at the same grain and have the same dimensions, consider
moving the measures into one fact table.
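For illustration, a rough SQL sketch of that consolidation, assuming two hypothetical fact tables FactSales and FactReturns that share a DateKey/StoreKey/ProductKey grain (your table and column names will differ):
-- Hypothetical example: merge two same-grain fact tables into a single
-- fact view that the cube can then use as its one fact table.
CREATE VIEW dbo.FactCombined AS
SELECT
    COALESCE(s.DateKey,    r.DateKey)    AS DateKey,
    COALESCE(s.StoreKey,   r.StoreKey)   AS StoreKey,
    COALESCE(s.ProductKey, r.ProductKey) AS ProductKey,
    s.SalesAmount,
    r.ReturnAmount
FROM dbo.FactSales AS s
FULL OUTER JOIN dbo.FactReturns AS r
    ON  s.DateKey    = r.DateKey
    AND s.StoreKey   = r.StoreKey
    AND s.ProductKey = r.ProductKey;
The FULL OUTER JOIN keeps rows that exist in only one of the two tables; if every key combination exists in both, a plain INNER JOIN works just as well.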
regards
Jamie
"anthony garcia" <acgarcia21@.verizon.net> wrote in message
news:ZISch.7793$Ga7.2309@.trnddc01...
> Is it possible to create a cube in analysis services that has columns from
> more than one fact table as long as the fact tables share the same data
> grain and dimensions?
> Thanks for any advice.
> Regards,
> Anthony
>
Wednesday, March 7, 2012
CPU Usage
I was just wondering why my CPU usage never really goes over 10% during a
pivot-table style operation?
I have about 350 million rows of fact data in my cube (13 weeks of Point of
Sale information). I have three measures and four dimensions, one with
multiple levels. One of my measures is a distinct store count. I created
ALL the aggregations (100%). Also, the cube is partitioned by week, by sale
type. For example, one partition will hold all regular sales for one week,
and another partition will hold all promotional sales for the same week. So, I
have 13 (weeks) x 12 (sale types) = 156 partitions.
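As a side note, each of those partitions is normally bound to a filtered query against the fact table. A minimal sketch, assuming a hypothetical FactPointOfSale table with WeekKey and SaleTypeKey columns (the real table and column names will differ):
-- Hypothetical query binding for the "week 1, regular sales" partition;
-- each of the 156 partitions uses the same shape of query with different key values.
SELECT *
FROM dbo.FactPointOfSale
WHERE WeekKey = 1
  AND SaleTypeKey = 1  -- 1 = regular sales in this sketch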
When I create a pivot table in Excel linked to this cube, as I rotate and
drill through, the server (dual P3 1GHz, 1GB RAM, Ultra3 SCSI) never
really peaks; in fact, I'm rarely seeing CPU usage over 10%.
I also have a question about cube population. During the data pull from the
server, the CPU is pegged at 100%, but during aggregation creation, the CPU
is around 10% again. Is there any way to make the pivot table queries and
the aggregations faster?
BTW: I have a multi-threaded partition build tool, and that doesn't move the
CPU over 10% either. I've managed to make the processing go faster by
staggering the threads so 2-3 are loading data and 2-3 are building
aggregations at any one time.
Thanks-
Malcolm
Hi Malcolm,
I assume you're talking about CPU usage on the server, but what does it look
like on the client? Also, what kind of queries are you running? Are they
getting data at the upper levels of your dimensions or at the leaf levels?
It sounds like disk I/O *might* be the bottleneck here. I assume you've read
the appropriate sections in the Analysis Services Performance Guide on this
subject.
Two other comments:
- You shouldn't have a distinct count measure in a cube with other measures
- it should be in a cube on its own, and you should then use a virtual cube
to create a unified view of all your measures. Again, the AS Performance
Guide contains more details on why this is.
- It's never worth doing 100% aggregations; you're probably adding a lot of
time to your processing for little effect. At the same time, it could be that
you're not even building the aggregations you actually need, especially if
you are querying at the lower levels of dimensions. I wouldn't recommend
building aggregations manually unless you know what you're doing, but you
might find the following post on my blog interesting if you want to see
whether your queries are hitting aggregations or not:
http://spaces.msn.com/members/cwebbbi/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!119.entry
HTH,
Chris
"Malcolm Toon" wrote:
> I was just wondering why my CPU usage never really goes over 10% during a
> pivot-table style operation?
> I have about 350million rows of fact data in my cube (13 weeks of Point of
> Sale information), I have three measures and four dimensions, one with
> multiple levels. One of my measures is a distinct store count. I created
> ALL the aggregations (100%). Also, the cube is partitioned by week, by sale
> type. For example, one partition will hold all regular sales for one week,
> another partition will hold all promotional sales for the same week. So, I
> have 13 (weeks) x 12 (sale types) partitions.
> When I create a pivot table in Excel linked to this cube, as I rotate and
> drill through, the server (dual P3 1GHz, 1GB RAM, Ultra3 SCSI) never
> really peaks; in fact, I'm rarely seeing CPU usage over 10%.
> I also have a question about cube population. During the data pull from the
> server, the CPU is pegged at 100%, but during aggregation creation, the CPU
> is around 10% again. Is there any way to make the pivot table queries and
> the aggregations faster?
> BTW: I have a multi-threaded partition build tool, and that doesn't move the
> CPU over 10% either. I've managed to make the processing go faster by
> staggering the threads so 2-3 are loading data and 2-3 are building
> aggregations at any one time.
> Thanks-
> Malcolm
>
>
Sunday, February 19, 2012
Counting Rows
I want to define a measure which is the count of rows in a fact table satisfying certain criteria. I just need a single value, and hence this measure need not be additive across any dimension.
There are 2 possible scenarios:
(1) The criteria involve columns from the fact table itself (Example: [Fact Table].[Transaction Type] = 1 or 2 or 3)
(2) The criteria involve columns from dimension tables (Example: [Dimension Table].[Transaction Category] = 'D')
I am a novice to SS BI and I have tried out various solutions involving MDX queries, calculated members, named sets etc., but without success. Kindly suggest how to achieve this.
Thanks & Regards.
Is my question so naive?|||Maybe the best solution (for performance and ease of use) would be to create an additive measure.
Try it in this way:
1) Create a Named Calculation in your fact table in your data source view
2) In the Expression, write a CASE that tests your condition and returns 1 for True and NULL for False, something like this:
CASE
WHEN ([Transaction Type] = 1) THEN 1
WHEN ([Transaction Type] = 2) THEN 1
WHEN ([Transaction Type] = 3) THEN 1
WHEN ([Transaction Category] = 'D') THEN 1
ELSE NULL
END
3) In your cube, create an additive measure based on your Named Calculation
Now you can use your measure directly in every MDX query.
Let me know if it works fine.
Francesco
|||Another approach is to create a row count measure and create a dimension based on Transaction Type, in addition to the dimension containing Transaction Category. You can then filter the count on Transaction Type and Transaction Category at run time. Or, if you never want to see a certain type of transaction, you could put a named query in your DSV, or a query binding in your partition, that filters out those rows.
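For what it's worth, here is a rough sketch of that kind of filtering named query, assuming a hypothetical fact table named FactTransaction with a TransactionTypeKey column (your table and column names will differ):
-- Hypothetical named query for the DSV (or partition query binding):
-- only rows with the transaction types of interest reach the cube, so a
-- simple row count measure over this query gives the filtered count.
SELECT *
FROM dbo.FactTransaction
WHERE TransactionTypeKey IN (1, 2, 3)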