Channel: SQL Server Analysis Services forum
Viewing all 14337 articles

Need help on cube design


Hi All,

I have a requirement for which I need input on the cube design.

DWH tables are as follows
Transaction Table: This table contains employee performance details for each project they have worked on.
Id, ProjectId, ProjectStartdate, ProjectEnddate, EvaluationMonth, EmployeeId

Mapping Table: This table contains the mapping between manager, role, and employee.
ManagerId, Role (Delivery Head for SharePoint, Delivery Head for BI, Admin, HR, Accounts), EmployeeID

1) One manager can play multiple roles and, based on the role, can assess the performance of a different set of employees on his/her team.
2) If a manager is the delivery head of the BI department, he/she can access the performance of all employees who work on BI projects.
3) At the same time, the same manager can also play the HR role.
4) When "HR" is selected in the role filter, the manager can see the performance of all employees in the organization, because the HR role can access all employees.

Requirement: 
I need to create a report from cube.
1) The report will have EvaluationMonth and Role as filter selections.
2) The report will show the average rating for the employees that the logged-in user oversees.

So if I log in and I am the delivery head for the BI department as well as HR, the Role filter will show "Delivery Head for BI" and "HR". If I select "Delivery Head for BI", the report shows the average rating for all BI employees; if I select "HR", it should show the average rating for all employees, since HR can access every employee's performance.

I guess this can be done using a many-to-many mapping, but I need a little more information. In the past I have implemented many-to-many mappings where the mapping table has only two fields (e.g. AccountId and CustomerId); here we have three fields, and I am not sure whether I need to split the mapping table above into two tables.
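One possible shape, purely as a sketch with hypothetical table and column names, is to split the three-column mapping into two bridge tables so that each many-to-many relationship keeps the classic two-column form, with the role carried by a small role dimension:

```sql
-- Hypothetical split of the Mapping table into two bridge tables.
-- DimRole holds the role list (Delivery Head for BI, HR, ...).
CREATE TABLE BridgeManagerRole (
    ManagerId INT NOT NULL,
    RoleId    INT NOT NULL   -- FK to DimRole
);

CREATE TABLE BridgeRoleEmployee (
    ManagerId  INT NOT NULL,
    RoleId     INT NOT NULL,  -- FK to DimRole
    EmployeeId INT NOT NULL   -- employees visible to this manager/role pair
);
```

With this shape, the role filter slices BridgeRoleEmployee, which in turn acts as the many-to-many bridge between the employee dimension and the transaction measure group.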

Any help is appreciated. Thank you


Error when processing dimension using XMLA script


I am hopeful that someone has an answer for this...

I am in the process of deploying SQL Server 2014 and have been running in parallel with my production environment for 2 weeks with no issues. Yesterday morning, my primary data load job failed while executing a process-dimension script. The script does a ProcessUpdate on multiple dimensions, but fails on the product dimension. This is the script:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2">
      <Object>
        <DatabaseID>WestLakeSales</DatabaseID>
        <DimensionID>Dim Product</DimensionID>
      </Object>
      <Type>ProcessUpdate</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

It processes the attributes and starts processing the associated measure groups, then fails with a return code that I cannot find any information on:

<return xmlns="urn:schemas-microsoft-com:xml-analysis">
  <results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
    <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
      <Exception xmlns="urn:schemas-microsoft-com:xml-analysis:exception" />
      <Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
        <Error ErrorCode="3238133769" Description="" Source="Microsoft SQL Server 2014 Analysis Services" HelpFile="" />
      </Messages>
    </root>
  </results>
</return>

At the end of the job log it just shows this:

*****

Finished processing the 'Inventory Weekly 2009 08' partition.
Processing of the 'Inventory Weekly 2009 06' partition has started.
Finished processing the 'Inventory Weekly 2009 06' partition.
Processing of the 'Inventory Weekly 2010 02' partition has started.
The job completed with failure.
The job completed with failure.
The job completed with failure.

Execution complete

********

When I run ProcessUpdate from Studio, it processes all of the attributes and measure group partitions, then throws a single 'error' result with no description or message. It appears that all partitions process successfully. See the screenshot:

This script ran fine up until yesterday. There were no system changes. The same script is currently running fine on the production SQL Server 2008 instance.

Has anyone seen anything like this happening in SSAS 2014? Any tips on further troubleshooting?

error while processing dimension using xmla


Hi,

I am facing the error below during incremental processing of dimensions. I am using Windows Server 2012 R2 and SQL Server 2014.

The Profiler trace also does not show any error message.

I hope one of you has faced this issue; please guide me.

<return xmlns="urn:schemas-microsoft-com:xml-analysis">
  <results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
    <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
      <Exception xmlns="urn:schemas-microsoft-com:xml-analysis:exception" />
      <Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
        <Error ErrorCode="3238133769" Description="" Source="Microsoft SQL Server 2014 Analysis Services" HelpFile="" />
      </Messages>
    </root>
  </results>
</return>

The step failed.

Profiler trace.

Regards,

Sagar

SSAS Numa node - Hyper V


I am busy setting up our new DW environment running SQL 2016.  Our current environment is:

2 HP servers with 192 GB RAM and 2 x 4-core CPUs each.
An instance of SQL Server, SSAS Multidimensional, and SSAS Tabular installed on a failover cluster.
Normal operation has SQL Server on one server and the two SSAS instances on the other.

Now, I know that SSAS does not work as well across multiple NUMA nodes, so I would like to limit each SSAS instance to a single NUMA node. The only way to do this on Server 2012 R2 is with Hyper-V. So the new environment would look like:
2 servers, 384 GB RAM, 2 x 4-core CPUs each.
SQL Server installed on the failover cluster as usual.
2 Hyper-V VMs to run the 2 SSAS instances, each limited to a different NUMA node.

Does this make sense? Will it work?
How does Hyper-V allocate RAM? If we have a problem with one server and everything ends up running on one node, will that cause a problem?

Thanks

Tail Function Performance Degradation


Hi,

I'm using the Tail function to retrieve the last non-empty Sale Date from the cube.

But with the Tail function, the report takes more than 7 minutes to display the result.

Is there any way to improve the performance of the Tail function?

WITH MEMBER [Measures].[LastUsed] AS
  Extract(
    Tail(
      Filter(
        {[Sale Date].[Date].Children},
        NOT IsEmpty([Measures].[Sales Item Count])
      ),
      1
    ),
    [Sale Date].[Date]
  ).Item(0).Member_Caption
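For what it's worth, the Filter over all dates is evaluated cell by cell and is usually the slow part. A common alternative, sketched here reusing the same measure and hierarchy names as the query above, is NonEmpty, which the storage engine can evaluate in bulk:

```mdx
WITH MEMBER [Measures].[LastUsed] AS
  Tail(
    -- NonEmpty removes dates with no Sales Item Count in one bulk pass,
    -- instead of testing IsEmpty for every date individually.
    NonEmpty(
      [Sale Date].[Date].Children,
      [Measures].[Sales Item Count]
    ),
    1
  ).Item(0).Member_Caption
```

This is only a sketch; whether it helps depends on the cube's aggregation design and the surrounding query context.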

Can anyone please help me on this?

Database size: SSMS vs Explorer


Hi,

If I compare database sizes between SSMS and Windows Explorer, I get huge differences for some databases:

Database 1:    SSMS: 495,428 MB    Explorer: 483 GB  --> nearly the same

Database 2:    SSMS: 83,397 MB     Explorer: 310 GB  --> nearly 4x bigger

They are on the same Server, same disk but on different instances.

Thanks

Upgrading ssas cube from ssas 2012 to ssas 2014

While upgrading an SSAS cube from SSAS 2012 to SSAS 2014, what performance benefits can we expect? Also, is there any tool to upgrade a cube to 2014?

Pivot table becoming unresponsive in Direct Query mode


Hi ,

We are building an SSAS tabular model in DirectQuery mode using a base table that has more than 87 million rows. It also has almost 70 to 80 columns, and we have a few more dimension tables in our model. We ran into two problems.

1. We are unable to retrieve the pivot table report while analyzing with Excel. After adding a few columns to the report, it becomes unresponsive. We are using the 32-bit version of Excel. Has anyone run into this problem before? If so, could you please let us know what approach you used?

2. As one of our filters has millions of records, we tried to implement a hierarchy for that filter and bring it into the model to filter data. Unfortunately, we cannot see the hierarchies while browsing through any MDX tool such as Excel when the model is in DirectQuery mode. Please let me know of any workaround for this.

Thanks,


How to deploy changes in Tabular Model and save existing partitions in tables that were not changed.


Hi!

I have a set of tables defined in SSDT with an initial single partition; I then run an ETL (AMO-based) script once per day to load the daily set of records. Each new day gets a new partition.

My problem is that when I make some changes in the model (not related to the tables with data, e.g. just adding a new table) and try to deploy it, even with the "Do Not Process" option, SSDT leaves only the first initial partition and deletes all the others.

Is this the only way to deploy changes to an existing tabular model, or is there some trick to avoid reloading the data afterwards?

I have a lot of data, and such a process takes a few hours (potentially days). I agree that if the table itself or its relationships were changed, then processing the table is unavoidable, but in my case I hope there is some way to make it easier...

Thanks 




problem with a calculated measure


Hello,

 

I have a problem when creating a calculated measure in an SSAS cube. The calculation I am trying to do is:

[Measures].[QUANTITY] * [Measures].[UNIT SALE PRICE]

but the calculation is performed in the cube as:

sum([QUANTITY]) * sum([UNIT SALE PRICE])

instead of this:
sum([QUANTITY] * [UNIT SALE PRICE])
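This happens because a calculated member is evaluated after aggregation, so both operands are already summed. A common workaround is to do the multiplication at the row level, e.g. in a DSV named calculation or a view over the fact table, and expose the result as a regular Sum measure. A sketch, assuming a hypothetical fact table named FactSales with the columns from the post:

```sql
-- Hypothetical view / DSV named calculation: multiply per row,
-- then let the cube aggregate SALE_AMOUNT as a plain Sum measure,
-- which yields sum([QUANTITY] * [UNIT SALE PRICE]).
SELECT
    f.QUANTITY,
    f.[UNIT SALE PRICE],
    f.QUANTITY * f.[UNIT SALE PRICE] AS SALE_AMOUNT
FROM FactSales AS f;
```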


Can someone help me?

MDX - % of Parent Row Total


Hello,

I need an MDX script for a ratio feature; the feature exists in Excel and is called "Show Values As > % of Parent Row Total".

It shouldn't matter which dimensions are selected: I want to see on every row the ratio value from the lowest level up to the highest (total = 100%), and the total should depend on my selected/filtered set, exactly like Excel.

Excel example:

http://i.stack.imgur.com/xQZPO.png

(2 dimensions in this example, a Date dimension and an ArticleGroup dimension)

I am new to MDX, and I know this issue is a little complicated because we have to work dynamically with the axes. I found some example scripts, but they don't solve my problem.
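For a single hierarchy, a common starting point looks like the sketch below. The measure [Measures].[Amount], the [Date].[Calendar] hierarchy, and the cube name are hypothetical placeholders; the fully dynamic multi-hierarchy case (matching Excel across arbitrary row dimensions) needs considerably more work:

```mdx
WITH MEMBER [Measures].[Pct Of Parent] AS
  IIF(
    -- the top member has no parent: show 100%
    [Date].[Calendar].CurrentMember.Parent IS NULL,
    1,
    [Measures].[Amount]
      / ([Date].[Calendar].CurrentMember.Parent, [Measures].[Amount])
  ), FORMAT_STRING = 'Percent'
SELECT [Measures].[Pct Of Parent] ON COLUMNS,
       [Date].[Calendar].AllMembers ON ROWS
FROM [Cube]
```

Each row is divided by its parent's value, so every level's children sum to 100% of their parent, which is the behavior of Excel's "% of Parent Row Total" for one hierarchy.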


Internal error: An unexpected error occurred (file 'pffilestorefilegroup.cpp', line 1496


Hi,

I've 2 dimensions in my cube with just 2 members each (Dim1 "Show in Thousands" with "Original" and "In Thousands"; Dim2 "Invert Signs" with "Original" and "Inverted"). Both dimensions are connected as Regular via named calculations. We are also using account intelligence with custom rollup formulas and unary operators.

There are two Scopes:

Scope([Show in Thousands].[Show in Thousands].&[2]);

    This=[Show in Thousands].[Show in Thousands].&[1] /1000;

End Scope;

Scope([Invert Signs].[Invert Signs].&[2]);

    This=[Invert Signs].[Invert Signs].&[1] *(-1);

End Scope;

Now, in my MDX query, when I have [Invert Signs].[Invert Signs].&[1] and [Show in Thousands].[Show in Thousands].&[1] in my WHERE clause, I get this error message:

Internal error: An unexpected error occurred (file 'pffilestorefilegroup.cpp', line 1496, function 'PFFileStoreGroup::ReadPage').

Server: The current operation was cancelled because another operation in the transaction failed.

With the combinations 1,2; 2,1; and 2,2 it works as expected.

Thanks in advance

Averaging down a hierarchy


I have a requirement to perform a calculation by averaging down a hierarchy.  Currently this calculation is defined as shown below:

MEMBER ParentAverage AS
  IIF(
    ISLEAF([MyDimension].[MyHierarchy].CURRENTMEMBER),
    [Measures].[ChildAverage],
    AVG(
      [MyDimension].[MyHierarchy].CURRENTMEMBER.CHILDREN,
      [Measures].[ParentAverage]
    )
  )

This calculation gives me the expected results, but I have started running into performance issues with it. Given that this hierarchy only ever has two levels (i.e. parent and child), is there a better way to do the same calculation?
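Since the hierarchy is only ever two levels deep, one option is to drop the recursion and average the base measure over the children directly. This is a sketch, reusing the names above and assuming that every child of a non-leaf member is itself a leaf (which holds for a strict two-level hierarchy); recursive calculated members force cell-by-cell evaluation, so removing the self-reference often helps:

```mdx
MEMBER ParentAverage AS
  IIF(
    ISLEAF([MyDimension].[MyHierarchy].CURRENTMEMBER),
    [Measures].[ChildAverage],
    AVG(
      [MyDimension].[MyHierarchy].CURRENTMEMBER.CHILDREN,
      [Measures].[ChildAverage]  -- base measure directly, no recursion
    )
  )
```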

How to propagate relationships for a calculated member in SSAS ?


I am stuck in a situation that I believe should have a ready solution, because it looks like a common scenario to me. Any help is much appreciated.

I have a fact 'F' and a dimension 'D'. The relationship between D and F is many-to-many. I have modeled this relationship in my dimension usage using a bridge table 'B' and an intermediate dimension 'D1'.

My fact F has both base measures (coming directly from the DSV) and some calculated members. When I browse in Excel by dropping attributes from D and measures from F, I get the expected results for my base measures. But the calculated members show the grand total and won't break up according to the many-to-many relationship as defined in the cube.

Note: I am assigning default values to my calculated members in the DSV. The final assignment to the calculated members happens in a scoped assignment.

Does SSAS not support relationships for calculated members? Too bad if it doesn't. Any workarounds?

Amit Chandra

Problem with YTD calculation

Hello,
I'm currently trying to calculate YTD REVENUE with the "Define dimension intelligence" wizard, but the result I get is still "NA".
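For comparison, a hand-written YTD calculation looks like the sketch below. The [Date].[Calendar] hierarchy (with a [Calendar Year] level and a [Month] level), the [Measures].[Revenue] base measure, and the cube name are hypothetical placeholders for whatever your model uses:

```mdx
WITH MEMBER [Measures].[YTD Revenue] AS
  AGGREGATE(
    PERIODSTODATE(
      [Date].[Calendar].[Calendar Year],   -- year level of the hierarchy
      [Date].[Calendar].CURRENTMEMBER
    ),
    [Measures].[Revenue]
  )
SELECT [Measures].[YTD Revenue] ON COLUMNS,
       [Date].[Calendar].[Month].MEMBERS ON ROWS
FROM [Cube]
```

An "NA" result from the dimension-intelligence wizard often means the time attribute types (Years, Months, etc.) are not set on the date dimension, so the time-aware functions cannot resolve the year level; that is worth checking first.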

Can someone help me?

SSAS High Availability in Azure (not SQL DBE)


Hello,

I was wondering what my options would be for an SSAS environment with high availability in Azure.

Please refrain from mentioning HA solutions for the SQL Database Engine itself; that is not the issue at hand.

The issue is that, as far as I know (and please correct me if I am wrong), the only way to provide SSAS HA is through clustering that involves shared storage, which is not available in Azure.

So I was wondering if anyone has come across such a requirement before, and whether there are any novel/original solutions (thinking outside the box).

Regards,
P.

How to eliminate ALL level from aggregation


Hello,

I have a simple MDX query:

WITH MEMBER X AS
  AGGREGATE(
    NULL : [Audit Date].[Calendar].CurrentMember,
    [Measures].[CountAccount]
  )
SELECT X ON COLUMNS,
       [Audit Date].[Calender Year].MEMBERS ON ROWS
FROM [Retention]

Result:

                  X
All Periods 781,742
2008        20,295
2009        56,942
2010        117,876
2011        181,559
2012        398,102
2013        478,801
2014        595,662
2015        743,582
2016        781,742
2017        781,742
2018        781,742
2019        781,742
2020        781,742

The current year is 2016 and there is no data for 2017 and beyond, yet the query gives me data for all the years available in the date dimension. Any idea how to remove the ALL level, and the years that don't belong here, from calculation X? Thank you in advance!
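One possible approach, sketched below reusing the names from the query above, relies on two things: the `[Hierarchy].[Level].MEMBERS` form returns only that level's members (skipping the All member), and a Filter on the base measure drops years that have no fact data of their own:

```mdx
WITH MEMBER X AS
  AGGREGATE(
    NULL : [Audit Date].[Calendar].CurrentMember,
    [Measures].[CountAccount]
  )
SELECT X ON COLUMNS,
       -- level members exclude [All]; Filter keeps only years
       -- that themselves have CountAccount data
       FILTER(
         [Audit Date].[Calender Year].[Calender Year].MEMBERS,
         NOT ISEMPTY([Measures].[CountAccount])
       ) ON ROWS
FROM [Retention]
```

NON EMPTY on the rows axis would not help here, because the running aggregate X is non-empty for every future year; filtering on the base measure avoids that.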


South Florida Business Intelligence Developer



Adventure Works Sample


I have downloaded and installed the SQL Server 2014 AdventureWorks multidimensional solution.

My question is: why, in the DSV for the Mined cube, are the fact tables not colored yellow? Usually the fact table objects are yellow and the dimensions are blue.

The solution has 2 cubes, and the Adventure Works cube has its facts in yellow.

Connecting Excel to SQL Server 2016 Analysis Services running in Azure VM


Hi

With the new launch of SQL Server 2016, I would like to hear what the best practice would be for accessing SSAS Tabular running in an Azure VM.

We have some OLAP Tabular models that we would like to expose to our customers (who are not members of our Azure AD domain)

Currently, you can install the Enterprise Gateway on your VM running SQL 2016 SSAS Tabular and point Power BI to DirectQuery your on-premises instance (in this case a VM in Azure joined to Azure AD). Everything works fine, but the "Analyze in Excel" feature in Power BI doesn't support datasets running DirectQuery through the Enterprise Gateway.

So, to give our customers the option to access the tabular model using Power BI, Excel, Reporting Services, or Datazen, what would be the best approach? We would prefer to be cloud-only, i.e. an Azure VM running SQL Server 2016 joined to Azure AD.

Create Calculated Measure with MDX and Where Clause


Hi,

I'm having trouble with MDX concepts and how to create new calculated measures with them.

My problem is:

I have a table [FactSales] with [Sales Amount] and a corresponding [SK_Currency] and [Sales Date]. On the [FactCurrency] table I have [FK_Currency] and [DateOfCurrency] and [ExchangeRate].

How can I create a calculated measure that, for each date in [FactSales], fetches the corresponding [ExchangeRate] and divides [Sales Amount] by it?
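One common pattern is sketched below. It assumes [ExchangeRate] is exposed as a physical measure (e.g. with a LastNonEmpty aggregation) on a measure group built from [FactCurrency], related to the same date and currency dimensions as the [FactSales] measure group; the cube name is a placeholder:

```mdx
WITH MEMBER [Measures].[Sales Amount Converted] AS
  IIF(
    -- guard against missing or zero rates
    ISEMPTY([Measures].[ExchangeRate]) OR [Measures].[ExchangeRate] = 0,
    NULL,
    [Measures].[Sales Amount] / [Measures].[ExchangeRate]
  )
SELECT [Measures].[Sales Amount Converted] ON COLUMNS
FROM [Cube]
```

One caveat: above the date leaf this divides an aggregated amount by an aggregated rate. To convert per day and then aggregate, the division usually has to be pushed down to the leaf level, for example with a SCOPE assignment on the date key attribute or the measure's Measure Expression property.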


