Channel: SQL Server Analysis Services forum
Viewing all 14337 articles

How to sum a measure value in MDX?


I need to create a measure that is the sum of another measure, but in my output I'm getting a count of the rows instead.

Below are the MDX expressions used:

    Distinctdealer = DistinctCount([Dealer])

    CREATE MEMBER CURRENTCUBE.[Measures].[Helper]
      AS IIF([Measures].[Netsales] <= 0, NULL, [Measures].[Distinctdealer]);

    CREATE MEMBER CURRENTCUBE.[Measures].[Totalcount]
      AS SUM([Measures].[Helper]);

Attached images show the result of each measure. The expected output for the Totalcount measure should be 2, but I haven't been able to achieve it. Could anyone please help me with this?
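For comparison, SUM in MDX expects a set to iterate over, so passing only [Measures].[Helper] gives it nothing to aggregate across. A sketch of one possible shape, assuming a [Dealer].[Dealer] attribute hierarchy exists (that name is an assumption):

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Totalcount]
AS SUM(
       -- assumed attribute hierarchy; substitute the real dealer hierarchy
       [Dealer].[Dealer].[Dealer].Members,
       IIF([Measures].[Netsales] <= 0, NULL, 1)
   );
```

Here each dealer with positive Netsales contributes 1, so the total is a count of qualifying dealers rather than a count of rows.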




Calculated measure with COUNT of set works in MDX, but not as measure in cube with dynamic filter?


I am trying to COUNT the number of customers that bought an item, grouped by item category. I think I have managed the MDX part of it, but when I create a dynamic set and member in the cube, it does not work as intended: I always get 1.

This is my MDX ([Measures].[Sold] is at the lowest (SKU) level and can only be 0 or 1):

with set CC as filter( [CUSTOMERS].[Customer].[Customer] , [Measures].[Sold] > 0 )
member measures.C as 
	iif(
		count(nonempty( CC, [Measures].[Sold])) = 0, null, 
		count(nonempty( CC, [Measures].[Sold]))
	)
select {[Measures].[Sales], measures.C } on 0,
non empty 
[MATERIALS].[Brand].allmembers 
on 1
from (select ({	[CUSTOMERS].[Customer].&[1],
				[CUSTOMERS].[Customer].&[2], 
				[CUSTOMERS].[Customer].&[3]}, 
				[PERIOD].[Period].&[2019002]) ON 0 FROM CUBE)

I get results as expected: the calculated measure C counts 3 customers who bought at least one product from category 1, 2 customers from category 2, etc.

                Sales   C
    All           78    3
    CATEGORY 1    24    3
    CATEGORY 2    18    2
    CATEGORY 3     9    2
    CATEGORY 4    24    2
    CATEGORY 5     3    1


Now, when I create a dynamic set and a new member in the cube script with exactly the same logic, I always get count = 1. I don't understand it, and it drives me nuts. I used the subcube because my main goal is to have the measure and cube ready for Excel.

create dynamic set currentcube.[CC] as 
 filter([CUSTOMERS].[Customer].[Customer], [Measures].[SOLD] > 0 );

create member currentcube.[measures].[C_CUBE] as
iif(
count(nonempty([CC], [Measures].[SOLD])) = 0, null,
count( nonempty ( [CC], [Measures].[SOLD] )) );

When I run the same MDX along with the new measure, the original calculation still gives the expected results, but the new measure member always returns 1, as if the dynamic set had only one item.

                Sales   C   C_CUBE
    All           78    3   1
    CATEGORY 1    24    3   1
    CATEGORY 2    18    2   1
    CATEGORY 3     9    2   1
    CATEGORY 4    24    2   1
    CATEGORY 5     3    1   1

Could someone perhaps point me in the right direction?

Thank you
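For reference, a difference that may matter here: a dynamic set is evaluated in the query's default context rather than per cell, whereas the inline FILTER in the query above is re-evaluated for every row. A sketch that avoids the named set by using EXISTING to force the customer level into the current cell's context (this is an assumption about the cause, not a confirmed fix):

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[C_CUBE]
AS IIF(
       COUNT(NONEMPTY(EXISTING [CUSTOMERS].[Customer].[Customer].Members,
                      [Measures].[SOLD])) = 0,
       NULL,
       -- EXISTING restricts the customer members to the current slice
       COUNT(NONEMPTY(EXISTING [CUSTOMERS].[Customer].[Customer].Members,
                      [Measures].[SOLD]))
   );
```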


SSAS Tabular model - full table processing takes 9 minutes to begin processing the partition


We have noticed something strange with our main Tabular model when using a calculated table that contains an expression (not just an alias of another table). 

For example: The table 'Currency Codes' is a calculated table based on: SUMMARIZECOLUMNS ( 'Currency Rates'[CurrencyCodeKey] ).  You can see that this new calculated table is based on another table called 'Currency Rates', which just loads rates from a SQL table.  When we perform a "process full" on 'Currency Rates', it takes 9 minutes to even begin processing the partition. Once it starts processing the partition, it only takes a few seconds to load the rates. 

We can't figure out why the system takes 9 minutes to start processing the partition.  Doing a full trace with the SQL Profiler does not reveal any activity with my username or the database until after the 9 minutes.  The CPU is at 0% and the memory isn't moving.  If we remove the calculated table from the model, the 'Currency Rates' table processes instantly. 

This is also affecting our "process full database" operation.  It takes 9 minutes to even start processing any partitions in the database.  

It's like it's doing some schema validation operation or something.  Does anyone have any suggestions on how to locate the problem?


We are using compatibility level 1400, SSAS version 14.0.2.451.

Thanks for any help. 

How to get my account verified?

I need to upload a file and get the error "Body text cannot contain images or links until we are able to verify your account". Could somebody please verify my account as soon as possible?

Testing calculated members in cube browser


Dear SQL,

I am new to MDX and cubes as a whole, so pardon my question. I am trying to add a distinct count of OrderIDs so that I can see the orders per customer. The OrderIDs are in a supplier dimension, while the customers are in a separate dimension.

When I add a measure directly in the cube structure, everything works fine and I am able to see the number of OrderIDs per customer. However, when I add a calculated member with DISTINCTCOUNT ( [Suppliers.OrderID.Members] ), it does not work.

What is the difference in the calculation which Visual Studio generates and mine? 
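For comparison, the measure the cube designer generates is a measure-group distinct count evaluated over fact rows, while a calculated member like the one above counts dimension members; the member reference also needs separate brackets for each part of the name. A syntactically valid sketch (assuming the attribute hierarchy is [Suppliers].[OrderID], which is an assumption):

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Order Count]
AS DISTINCTCOUNT(
       -- EXISTING restricts the OrderID members to the current slice (e.g. customer)
       EXISTING [Suppliers].[OrderID].[OrderID].Members
   );
```

Depending on the model, this may still need to be wrapped in NONEMPTY against a fact measure so that only OrderIDs with data are counted.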

Kind regards,

Michiel

Project Burndown help



I'm trying to make a burndown report, and here is some sample data. I'm having trouble getting what I'm going for (I have a model in SSAS; this is just data in Power BI because it was easier to sort this way).

As you can see through the versionWeek column, the initial plan was created and then altered on 7/28.

Here is what I'm trying to get. Everything is summed when it comes after the week start but before the next version of a given week comes out. So, looking at the sample data, the 7/14 version week's hoursPlanned values are all summed until 7/28, when the new version of the last couple of weeks shows up.

The "Burndown" column is my goal here.

Thanks in advance.

The size specified for a binding was too small, resulting in one or more column values being truncated.


Hi All

When processing the cubes, I'm getting the following error:

The size specified for a binding was too small, resulting in one or more column values being truncated.

I checked the data sizes of the key column and name column and compared them to the database. All fields match, but I'm still getting the error. Please advise.

Another question I have: can the data size of the name column be greater than the key column's data size, or should the sizes be the same?

After looking everywhere in the DSV, I found that the data types for the fact table have a Data Size of -1. I tried many ways to edit the Data Size but couldn't find how to do it. Please help.


SV



Show One value in all the rows in a single column


Hi 

I have data in the following format in my tabular model:

    C_ID    T_Date      Capacity
    1       8/2/2019    10
    1       8/3/2019    20
    1       8/4/2019    30
    1       8/5/2019    40
    1       8/6/2019    50

I need to create two derived columns using DAX, which will show the data in the format below:

    C_ID    T_Date      Capacity    Least Date    Least Capacity
    1       8/2/2019    10          8/2/2019      10
    1       8/3/2019    20          8/2/2019      10
    1       8/4/2019    30          8/2/2019      10
    1       8/5/2019    40          8/2/2019      10
    1       8/6/2019    50          8/2/2019      10
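A sketch of two DAX calculated columns that would produce that shape, assuming the table is named 'Data' (the table name, and "least" meaning the earliest T_Date per C_ID, are assumptions):

```dax
Least Date =
CALCULATE (
    MIN ( 'Data'[T_Date] ),              -- earliest date
    ALLEXCEPT ( 'Data', 'Data'[C_ID] )   -- per C_ID, ignoring other filters
)

Least Capacity =
LOOKUPVALUE (
    'Data'[Capacity],                    -- value to return
    'Data'[C_ID], 'Data'[C_ID],          -- match the current row's C_ID
    'Data'[T_Date], 'Data'[Least Date]   -- at that C_ID's earliest date
)
```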

Thanks in advance..


SSAS Cube : De-duplication of facts


Hi,

I have a question on offsetting some fact values in the cube based on business rules.

Scenario : 

I am getting sales figures for a Product from 2 different sources, with 2 different names (P1, P2). Now, the system treats them as separate products, as they need to be seen by different groups of users.

However, for a person looking at the Overall Sales figures, the values double up (as it's actually the same product).

Hence the customer now wants to offset the sales figure somehow, so that the figures appear correctly at Product level but one of them is ignored when the cube is aggregated at Company level.

One workaround is to generate a set of negative figures as a 3rd product (say, P3).

This solves the aggregation at Product and Company Level.

But the "P3" product is then also visible in the cube with all negative sales figures, and it needs to be hidden cosmetically (while still being used in aggregation).

Is there a way to do this?


Please use Marked as Answer if my post solved your problem and use Vote As Helpful if a post was useful.

MDX - How to SUM a calculated measure - only if value is greater than a given value say 1000


Hi

I have a measure group with an [Amount] - currency measure.

I want to create a calculated measure to SUM the [Amount], but ONLY where the individual [Amount] values are greater than a specific value such as 1000.

How can I build a measure to SUM with an IF statement?
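The usual pattern is to decide what "individual" means (the granularity at which the 1000 test applies) and iterate that set, keeping only qualifying amounts. A sketch, where [Transaction].[Id] is a hypothetical leaf-level hierarchy standing in for the real grain of [Amount]:

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Amount Over 1000]
AS SUM(
       -- hypothetical leaf level; replace with the real grain of [Amount]
       [Transaction].[Id].[Id].Members,
       IIF([Measures].[Amount] > 1000, [Measures].[Amount], NULL)
   );
```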


I.W Coetzer

Unable to connect to a data source using excel 2013


We are trying to access a sql cube by creating a new connection using excel 2013.

The error message received is "Unable to connect to data source. Unable to locate database server. Verify that the database server name you entered is correct, or contact the database administrator for help."

We are using a 64-bit machine and have tried everything possible, including reinstalling MS Office. All other connections work, but the user is unable to create a new connection.

The problem is related to using Analysis Services from Microsoft Excel 2013.

Project Burndown cont.


I previously asked this question, which didn't quite solve my issue:

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/842f501a-9062-48dd-97de-26f8a3586ac9/project-burndown-help?forum=sqlanalysisservices

Like I said in that thread, the goal is to total the hours that have been planned and have them drop off the graph as the weeks go by, along with altering that plan, if the plan is altered, based on the week in which it was altered. I'm going to give three examples of sample data and expected results (shown in Power BI and Excel, but I want the solution in my SSAS model; it's just easier to display the data this way).

(The right images will be the basis of my chart, with Week Start as the x-axis and Burndown as the values.)

Example 1: 

The first two weeks are calculated with the 10's until the plan is altered on the week of the 28th. 13's then replace it.

Example 2:

In the left image, the 4th column is the planGroup. As you can see, this example has two of them. Both were created on 7/14, but the second one was altered on the 28th and again on the 11th.

Example 3:

This one is a bit different because the planGroup ending in 8096 has those 0 entries on the 11th. This is because the plan was altered to no longer include the current week (the 11th). At the same time, 2 new planGroups were made for future weeks.

I'm thinking we will need to do something with the planGroups in order for this to work properly, but I'm not sure.

Any help would be appreciated. Thanks.

SSAS or SSRS running query in parallel on UAT but in serial on Prod


I am not sure if this is an SSRS question or SSAS question.  I have a report in SSRS with about 8 parameters that are tied to DAX queries that have as their main parameter the value of another parameter (Param1).  I select a value for Param1 and SSRS goes to the SSAS server to load available values for the 8 other parameters.

On UAT, this process takes about 3 seconds.  On Prod this process takes 8 seconds.

Some things that I observed:

Each query that runs on Prod is as fast as or faster than the equivalent on UAT; the trace I ran showed durations that were the same or lower on Prod than on UAT.

What I found interesting is that on Prod only one query runs each second.

On UAT I found that up to 4 queries ran in one second.

When I look at the Properties of the UAT SSAS server the properties are the same as the Prod SSAS properties.

What could be causing this difference in behavior?

Both servers are SQL Server 2016 Enterprise

Prod is build 13.0.5337.0

UAT is build 13.0.5270.0

Any suggestions on what to look for?


Russel Loski, MCSE Data Platform/Business Intelligence Twitter: @sqlmovers; blog: www.sqlmovers.com

How to find the last processed datatime of SSAS cube?


Hi,

I need to show the SSAS cube's last processed datetime on an SSRS report, and I'm using the query below to find it. However, I've recently observed from my own testing that it gives the database's last update time, which is not the same as the cube's last update time.

SELECT * FROM $System.MDSCHEMA_CUBES

I would appreciate it if someone could provide the exact way to find the SSAS cube's last processed datetime.
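For what it's worth, MDSCHEMA_CUBES also exposes a LAST_DATA_UPDATE column, and filtering on CUBE_SOURCE = 1 limits the rows to cubes rather than dimensions; whether that timestamp tracks cube processing or the database update on your build is exactly what is worth verifying:

```sql
SELECT CUBE_NAME, LAST_DATA_UPDATE
FROM $System.MDSCHEMA_CUBES
WHERE CUBE_SOURCE = 1
```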

Thanx,
Atul Sharan


DAX: Last Non Empty per date (Most Current Inventory)

Here is a little context. I have product subcategory as the row label. I have a daily snapshot of inventory and am using a count of inventory as the measure. I want the measure to return the latest dated inventory when I don't specify a date as the column label; but if a date is specified, I want to show that date's inventory for the cell in question. Inventory from day to day can go to 0, so a subcategory does not have inventory every day.

I have tried using LastNonEmpty to return the last date, but what happens is that one subcategory is from today and the next subcategory is from the previous day that did have inventory. I have been able to create a measure that returns the max date for the column in question. Here is that formula:

    CALCULATE(MAX([Business Date]), ALLEXCEPT('Inventory', 'Date'))

What I cannot seem to figure out is how to put that into the filter context for the inventory count, so that I return only the inventory pertaining to that date. Any and all help is appreciated!
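One way to fold that max date into the filter context (a sketch built from the formula above; COUNTROWS standing in for "count of inventory" is an assumption):

```dax
Latest Inventory Count :=
VAR MaxDate =
    CALCULATE (
        MAX ( 'Inventory'[Business Date] ),
        ALLEXCEPT ( 'Inventory', 'Date' )   -- keep only the Date context, as in the MAX formula
    )
RETURN
    CALCULATE (
        COUNTROWS ( 'Inventory' ),
        FILTER ( 'Inventory', 'Inventory'[Business Date] = MaxDate )
    )
```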


Does row-level security work if a null value is found in the cube model?


I have created one sales cube which has a country-wise sales table, a country table, and a security group table, like below.

SALES TABLE:

    country    sales_amount    security_group_id
    India      50000           S1
    UK         50000           S2
    NULL       50000           S1

COUNTRY TABLE:

    country  security_group_id
     India       S1
      UK         S2

SECURITY TABLE:

  User_name  security_group_id
    ABC               S1
    XYZ               S2

I am trying to restrict the data country-wise. The issue I have is: what if the country column has a null value? How should row-level security handle that?
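For context, a typical Tabular row filter on the sales table might look like the sketch below (table and column names are taken from the description above; the ISBLANK branch is the decision point, and choosing FALSE() there, i.e. hiding null-country rows, is an assumption about the desired behavior):

```dax
=
IF (
    ISBLANK ( 'Sales'[country] ),
    FALSE (),    -- hide rows whose country is null; use TRUE() to show them to everyone
    'Sales'[security_group_id]
        = LOOKUPVALUE (
            'Security'[security_group_id],
            'Security'[User_name], USERNAME ()
        )
)
```

Note that LOOKUPVALUE assumes one security group per user; users in multiple groups would need a different pattern.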

Can MDX be used for this?


Hi Team,

I have not used MDX before and, although reasonably good with DAX, I am struggling with the syntax.

My original request is here:

https://www.mrexcel.com/forum/power-bi/1106911-can-mdx-used.html

Can anyone offer any advice, please?

Thanks,

Matty


MDX Cumulative Count - With Repeating Numbers?


Hi

I am trying to build a cumulative count; however, it is returning a value where it should not.

In the screenshot below, you will see that the 'green' cells are the only results I want to see. But for some reason, when my original measure moves to another category (Plants to Animals), the cumulative measure is still counting an entry for the previous category (Plants). How can I change this MDX to only return a result where the dimension category is the same as the previous / last known category?

CREATE MEMBER CURRENTCUBE.[Measures].[Count]
AS SUM(
       {[Period].CurrentMember.Level.Members}.Item(0) : [Period].CurrentMember,
       [Measures].[Contract Count]
   ),
VISIBLE = 1;
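One hedged possibility (an assumption about the cause, not a confirmed fix): blank the running total wherever the current cell has no [Contract Count] of its own, so a total carried over from the previous category is suppressed:

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Count]
AS IIF(
       ISEMPTY([Measures].[Contract Count]),
       NULL,   -- suppress cells that only inherit an earlier category's total
       SUM(
           {[Period].CurrentMember.Level.Members}.Item(0) : [Period].CurrentMember,
           [Measures].[Contract Count]
       )
   ),
VISIBLE = 1;
```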


I.W Coetzer

Designing tables for a report, dimensional or OLTP model ?


Hi,

I got a vague requirement from the client and am wondering if I should design it in Analysis Services. However, I'm new to SSAS, so please advise whether I should build dimension and fact tables and a cube for this report, or just an OLTP table with slowly changing dimension logic to load the data.

If dimensional, should I build one dimension table per business vertical and one fact table for all dimensions, or one fact table per dimension table?

Report:


RLS in SSAS 2017 Tabular - Not ready for prime time?


We have a Tabular model that's complex but not big: over 100 tables but less than a 10GB memory footprint, with fact and dimension tables in a Kimball dimensional model. When accessing the model through a role with no RLS defined but OLS defined on some sensitive HR-related tables, a fairly complex DAX query against a large financial table performs quite well, in under 300ms when tested with DAX Studio. Adding a single RLS condition to that role, on a dimension not involved in the query at all, degrades performance nearly 50-fold, and total execution time climbs to more than 12,000ms. Removing that condition, performance improves again, back down to the 300ms range. Additionally, adding an RLS condition on any table/attribute in the model, whether or not that condition would affect the test query's results, causes the same 50-fold degradation in performance.

Has anyone else seen the same degradation in performance with RLS in an SSAS 2017 Tabular model? We applied CU16, but it did not resolve the issue. Should we open a support case with Microsoft? Currently, we have requirements to secure data by line of business, and the current RLS performance is a show stopper.

Martin


Martin Mason Wordpress Blog


