Channel: SQL Server Analysis Services forum
Viewing all 14337 articles

Get latest date price (Get value based on max of date dimension)


Hi,

I have a measure group product price, the measures are

1. Product Id

2. Price

3. Date

I have mapped the product id and date with product and date dimensions.

I need to get the latest price of the product.

For example:

Prod1, 100, 1/1/2015

Prod1, 300, 2/1/2015

Prod1, 250, 3/1/2015

Prod1, 150, 10/1/2015

So I need the latest price; in this case, 150.

I am new to MDX. Could you please explain an MDX query that achieves this?

Thanks,

Hari
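A sketch of one common MDX pattern for this: take the last date member that actually has a price, then read Price there. The cube, dimension, and measure names below ([Sales], [Product].[Product], [Date].[Date], [Measures].[Price]) are assumptions; adjust them to your model:

```mdx
WITH MEMBER [Measures].[Latest Price] AS
    (
        // last date that has a Price for the current product
        Tail(
            NonEmpty([Date].[Date].[Date].MEMBERS, [Measures].[Price]),
            1
        ).Item(0),
        [Measures].[Price]
    )
SELECT
    { [Measures].[Latest Price] } ON COLUMNS,
    [Product].[Product].[Product].MEMBERS ON ROWS
FROM [Sales]
```

Alternatively, on Enterprise Edition you can set the measure's AggregateFunction property to LastNonEmpty and avoid the calculation entirely.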


Getting error while executing the XMLA Script for SSAS Cube.


Hello,

I am getting the error below while executing the XMLA script for an SSAS cube:

The JSON DDL request failed with the following error message: Error executing a JSON script. Returned error: Error during a file operation

How can I resolve it?

Thanks

error processing a partition using query binding partition


I am using SQL Server 2014 Standard Edition with a multidimensional cube in Analysis Services.

The cube has about 20 measures and 60 dimensions and takes 3 hours to process successfully. To reduce the processing time, I set up query binding for all measures instead of the default table binding. After this change, I processed all dimensions successfully using the Analysis Services Processing Task, but processing the measure group failed with the errors copied below. There is no other change to the cube except creating a partition on the measure group using query binding.

If the cube was processing fine earlier, why is it throwing errors after creating the query-binding partition?

Error messages:

[Analysis Services Execute DDL Task] Error: Internal error: An unexpected error occurred (file 'pcprocbinding.cpp', line 1028, function 'PCDBTableCollection::FindOptimizedColumn').
[Analysis Services Execute DDL Task] Error: Errors in the OLAP storage engine: An error occurred while processing the 'View My Measure' partition of the 'My Measure' measure group for the 'My_Cube' cube from the My_DEV Copy database.

Also, please let me know what other steps can be taken to reduce processing time.

Thanks for your help.
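For reference, a query-binding partition source looks like the sketch below (all names are hypothetical). One commonly reported cause of the internal 'FindOptimizedColumn' error is a query whose column list does not exactly match the columns of the DSV table the measure group is bound to, so it is worth listing every column explicitly, with the same names and order as the bound table:

```xml
<Source xsi:type="QueryBinding" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <DataSourceID>MyRelationalDS</DataSourceID>
  <QueryDefinition>
    SELECT ProductKey, DateKey, SalesAmount
    FROM dbo.FactSales
    WHERE DateKey &gt;= 20170101
  </QueryDefinition>
</Source>
```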


SSAS - Context filter in Tabular measure


I have a snapshot fact table (called JobDaily) that records each employee's status for every day of their employment.

I have a measure (called Average Years Per PRI) that computes the average time employees spend at each classification level. For example, if Joe spent 2 years as an EC and 1 year as an AS, and Frank spent 4 years as an EC and 3 years as an AS, then the average time someone spends as an EC is 3 years and the average time someone spends as an AS is 2 years.

I'm building a dashboard that needs to compute this average based on whatever selection of people the user chooses. By default, this average should be based on the currently active employees. In other words, compute the averages considering only the people who have a record in JobDaily where the snapshot date is equal to TODAY().

However, the user might also want to further narrow the set of employees down to those who are also currently in cost centre 12345. Or the user might want to compute the averages based on employees (current or not) who ever set foot in cost centre 12345.

The dashboard has some other features: for example, it needs to show the number of people in each year segment (i.e. 0-2 years, 2-5 years, 5-9 years, 9+ years) for each classification. Whatever filters are applied to the averages above, the person counts by segment must also reflect the same filters.

I've been thinking about page filters on the dashboard as a way to control the filter for all visuals on that page; however, a page filter also causes the average calculation to consider only today's date, which totally defeats the purpose. I tried changing the context by using ALLSELECTED and removing the date filter applied on the page, but that didn't seem to work.

Here are the measures I currently have:

Average Years Per PRI :=
AVERAGEX (
    SUMMARIZE ( JobDaily, Person[PRI], "YearsPerPRI", [Count of Days] / 365 ),
    [YearsPerPRI]
)

-- A measure must return a scalar, but SUMMARIZE alone returns a table.
-- Evaluated in the context of a single YearBins row, MIN/MAX pick up that
-- bin's bounds (note the consistent [YearsPerPRI] spelling).
Number of People at Year Level :=
COUNTROWS (
    FILTER (
        SUMMARIZE ( JobDaily, Person[PRI], "YearsPerPRI", [Count of Days] / 365 ),
        [YearsPerPRI] >= MIN ( YearBins[Min] )
            && [YearsPerPRI] < MAX ( YearBins[Max] )
    )
)

I'm interested in hearing some thoughts others might have about how to approach this problem. Seems I could tackle it a number of different ways. And I have a feeling the way I'm currently choosing to tackle it might be the harder way.
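On the "page filter defeats the purpose" problem above, one direction worth trying (a sketch, not a definitive fix) is to remove only the snapshot-date filter inside the average while re-imposing the "has a row today" restriction on people. JobDaily[SnapshotDate] is an assumed column name:

```dax
Average Years Per PRI (Current Staff) :=
CALCULATE (
    [Average Years Per PRI],
    -- average over the full history, not just the page-filtered date...
    ALL ( JobDaily[SnapshotDate] ),
    -- ...but restricted to the PRIs that have a snapshot row today
    CALCULATETABLE (
        SUMMARIZE ( JobDaily, Person[PRI] ),
        FILTER ( ALL ( JobDaily[SnapshotDate] ), JobDaily[SnapshotDate] = TODAY () )
    )
)
```

Any further user selections (cost centre, etc.) remain part of the filter context, so they still narrow both this average and the bin counts.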

SSAS Tabular Model Older 1103 compatibility level for SQL Server 2014 - Unable to figure out how to "refresh" column names in table


The key piece is doing this without breaking all the measures, and preferably without re-importing the entire table (which would mess up the partition GUIDs used in the SQL Agent job that processes the partitions in a sliding-window fashion).

It seems I've tried everything. It's a partitioned table, so there are two places in the model designer to change the SQL: the Partition Manager and the Table Properties dialog. I tried creating a view with the EXACT same names, except for the few columns where there's a small name change.

*I am using aliases because there's a business name, which is what the measures in the tabular model use, and an underlying view name in the database. Both are the same as the old names for most of the columns; only a few columns change in both the alias and the view.*

Using aliases in the model designer for each column isn't working; for some reason it's not picking them up. For example, in grid view some of the old names that haven't changed are missing, and some appear but break the corresponding measures that use them.

Is there any way to refresh this metadata in the tabular model without going through all the work of re-importing the table and copy-pasting each measure? I was hoping I could just replace the Partition Manager / Table Properties SQL and be done with it. There are literally about 80 measures here, and that would be a ton of work.

Thanks so much


how to get the list of user using cube

Hi, I want to get a list of the users who access the cube regularly. With my very limited understanding, I was thinking of running Profiler on the server for a few days to get the list of users. Is this the right option? If yes, could you please suggest a tutorial link for this? If not, what other ways can I use? Please remember that I don't want the current user sessions connected to the server; I know I can get those using DMVs.

Regards, Shanu



ODBC option missing in SSAS Tabular Model


I have downloaded the latest version of SSDT, i.e. 14.0.61709.290.

In this version of SSDT, I found that the ODBC option is not available for SSAS tabular models.

Steps:

1) Create a new Tabular Model project.

2) Select Model >> "Import From Data Source".

3) Select "Others (OLEDB/ODBC)" in the "Table Import Wizard" window and click the "Next" button.

4) Click the "Build" button on the screen that opens.

5) Up to the previous version, we had an option for selecting an ODBC DSN, but in this version there is no such option; we can only select an OLE DB provider.

I have gone through the changelog for this version of SSDT, and nowhere does it mention that the ODBC option was removed.

The link to the changelog and setup for this version of SSDT is:

https://docs.microsoft.com/en-us/sql/ssdt/changelog-for-sql-server-data-tools-ssdt

The screenshot below is of the screen that opens when the Build button is clicked in step 4.

Kindly help! Thanks!



Cross filtering not working in SSAS tabular direct query model 2016


I have created a tabular model on the 2016 version using DirectQuery mode. I have applied dynamic row-level security in the model, which works fine; however, a filter applied on one table is not propagating to another related table.

Here is how my design looks like.

UserMapping >>> (Both Side M:1) Customer >>> (1:M) Sales

Dynamic security works fine on the Sales and Customer tables. However, when I apply a filter on Customer Name and select Sales details such as Invoice Number or Sales Agent Name, the filter applied on the Customer dimension does not propagate to Sales, and the sales data is not filtered, which looks very strange to me. Is this a Microsoft limitation, or am I doing something wrong when setting the properties? If it is a limitation, what would the workaround be? Please respond, as this is blocking me.

The odd thing I observed is that when I drag a measure from Sales, the filter propagation works; but if I only select attributes from Sales, the filter is not applied.
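The behavior described above (the filter works once a measure from Sales is added) matches how client tools query tabular models: with no measure in play, the tool issues a plain column query that may not traverse the relationship. One hedged workaround sketch is to expose an explicit measure that forces the Customer–Sales relationship to be evaluated; the key column names below are assumptions:

```dax
-- Hypothetical key columns; adjust to the actual relationship.
Sales Row Count :=
CALCULATE (
    COUNTROWS ( Sales ),
    CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
)
```

Adding such a measure to the visual (or setting the relationship's cross-filter direction to Both in the model, where DirectQuery allows it) lets the Customer filter reach the Sales attributes.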


Thanks, Shishupal

SSAS Excel Pivot Table : multiple filters are not dynamic / showing irrelevant values


Dear all,

We are new to the SSAS world, and we are testing Tabular with Excel pivot tables.

- I have a dimension "Country" with 8 countries.

- I have a dimension "Account" with a lot of account numbers (account numbers are specific to each country).

When I filter on Country = CHINA (for example), the Account filter is not dynamic: it displays the accounts of all countries rather than only the accounts of China. Our users want to see only the accounts of CHINA in the filter.

We found that using an Excel slicer instead of the classic filter does the trick, but we want the same behavior directly in the Excel filter.

Do you know if that is possible?

thanks!

SSAS Tabular cube deployment using XMLA/JSON script


Hi,

I created a tabular model in VS 2015; the database version is 2016. I have deployed the cube to my dev server from Visual Studio (right-click, Deploy).

Now I would like to take this cube to my SIT and UAT environments. When I generate the cube script using Create/Replace To and run it on the SIT server, it works fine, but every time it replaces the whole cube. With this approach, I lose all the data that has been refreshed in SIT, whereas my actual changes were only to the schema (for example, a column's Hidden property). For these sorts of changes, losing and replacing all the data in SIT does not make sense to me.

Similarly, if we lose historical data in PROD, it would take us a long time to re-process the cube there.

Is there any way to deploy schema changes alone in the versions I am using?
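For schema-only changes at compatibility level 1200 and above, a scoped TMSL alter command is one option: it replaces only the definition of the object it names, so data in the rest of the database stays in place (the altered object itself may still need reprocessing). A sketch; the database, table, and column names are hypothetical:

```json
{
  "alter": {
    "object": {
      "database": "MyTabularDB",
      "table": "Sales",
      "column": "Cost"
    },
    "column": {
      "name": "Cost",
      "dataType": "decimal",
      "sourceColumn": "Cost",
      "isHidden": true
    }
  }
}
```

The Analysis Services Deployment Wizard is another route: it can deploy the project's .asdatabase output while retaining existing partitions, roles, and role members.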

Thanks

Manoj

SSAS Cube: how to create folders and dimensions


In an SSAS cube, I have dimensions to create, and I want to show the output in Excel.

I should show them grouped by each department or type.

For ex :

1.0  Parent Item

1.1  Dept Number

1.1.1.       Full Dept Number

1.1.2.       Type

1.1.3.       Year

1.1.4.       Sr Number

1.2.     ID

1.3.     Definition

1.4.     Budget

 

2.0  Accounts

2.1  Investor

2.1.1         Name

2.1.2         Location

2.1.3         City

2.1.4         State

2.2  Information

2.2.1         Dept Name

2.2.2         Dept Branch

2.2.3         ………………………………

3.0  Details

3.1

    3.1.1.

    3.1.2

 

3.2

3.3

 

……………………………………………….

I can build dimensions for the fields in the view, but how do I create folder names like 1.0 Parent Item, 2.0 Accounts, 3.0 Details, and so on?

Can anyone help me out with how to achieve this?

I have all the above columns in a single view, except for folder names like 1.0 Parent Item, 2.0 Accounts, 3.0 Details, and 1.1 Dept Number, 2.1 Investor, and so on.

I sincerely appreciate your advice.

 

The data source view does not contain a definition for the 'dbo_view_myview' table or view.


Using SSAS on SQL Server 2014 Standard Edition.

I am trying to create a query-binding partition on one of my measure groups by selecting a subset of data (example query: SELECT <column list> FROM [dbo].[View_MyView] WHERE [View_MyView].[timeid] >= 20170701). There is only one query, so there is no data overlap between partitions.

The view shows up in the DSV, and I have also used Explore Data. Why am I getting this error when I deploy and process the partition?

"Errors in the high-level relational engine. The data source view does not contain a definition for the 'dbo_view_myview' table or view. The Source property may not have been set"

It processes fine if I use table binding.

Another thing I noticed: when I set up the partition with query binding, I do not see the DSV as a data source; only the relational data source is offered. When I set up a table-binding partition, it shows both the relational and DSV data sources, and the table-binding partition works with the DSV, not the relational data source. Is there a way to set up a query-binding partition with the DSV?

Thanks in advance.


SSAS issue after upgrade to SQL2016


Hi all,

We have a cube that was migrated from SQL 2005 to SQL 2016 and is causing issues. When a full load is completed via the ETL, the cube works perfectly. But when an incremental load of the data is completed, the cube throws up issues, treating the incremental load as a standalone day.

I have checked the underlying views and they are identical, as are the date bridges. I am losing the will to live; has anyone come across this before?

Excel is showing only 2,048 characters of an Analysis Services attribute


Hi guys,

I am working on a tabular SSAS model at the moment. For some reason, the user wants to see 32k characters in a tabular attribute. That is supported by SSAS, according to its limits (64k).

When I query the tabular model directly using either DAX or MDX, I can see the 32k of data. However, when I go to Excel, the same record shows only 2,048 characters in a pivot table. Is there a way to fix this in Excel?

Regards,

Lawrence Carvalho


Cannot Process SSAS Tabular Models with an SSAS-Based Data Source with Analysis Services Projects Extension v2.8.17


After the Analysis Services Projects extension updated to 2.8.17, my colleagues and I receive the following error when we try to add or process an SSAS model as a data source to a new or existing model with 1400 level compatibility:

Could not load file or assembly 'Microsoft.PowerBI.AdomdClient, Version=15.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A)

Previously we were able to fix this by installing the latest version of SSDT, but this time the problem seems to be caused by the latest version of the extension. With VS 2017 we can roll SSDT back to an older version, but with VS 2019 there does not seem to be a way to roll back or downgrade the extension. What is the best way to fix this issue?
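One workaround commonly suggested for "Strong name validation failed" errors is to register the offending assembly for strong-name verification skipping with the .NET sn.exe tool. This relaxes a security check, so treat it as a temporary, machine-local measure, and note that you may need to run it from both the 32-bit and 64-bit Developer Command Prompts:

```
:: Skip strong-name verification for assemblies signed with this
:: public key token (the token comes from the error message above)
sn.exe -Vr *,89845dcd8080cc91

:: To undo the registration later:
sn.exe -Vu *,89845dcd8080cc91
```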

Connect to SSAS\tabular cube with a nondomain user

Hi to all,

I'm trying to connect to an SSAS Tabular cube with a non-domain user, and I cannot find a way to define, if it is possible, a login not tied to a domain user. I've tried to run Excel with:

runas /user:MAINDOMAIN\FABRIZIOC "\"C:\Program Files (x86)\Microsoft Office\Office15\EXCEL.EXE\" /r \"C:\_fabtemp\MyTestSSAS.xlsx\""

Excel opens, but I always see the old user in the upper-right corner instead of FABRIZIOC. Can you help me identify a solution?

Many thanks to all,
Fab
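When the supplied account should be used only for network authentication (because the local machine is outside the domain), runas needs the /netonly switch. Excel will still show the local user in the corner, but connections to SSAS are made as the domain account. A sketch using the same paths as above:

```
runas /netonly /user:MAINDOMAIN\FABRIZIOC "\"C:\Program Files (x86)\Microsoft Office\Office15\EXCEL.EXE\" /r \"C:\_fabtemp\MyTestSSAS.xlsx\""
```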

MDX Dynamic set Performance ISSUE


Hi there,

In my cube I've created a hidden dynamic set defined as [Shop].[Shop].[Shop].MEMBERS that is used in all my formulas.

But it is very slow when evaluated in a formula, so I decided to delete it and use [Shop].[Shop].[Shop].MEMBERS directly in the formula. That is very fast, but the total doesn't work if, while browsing the cube, you filter on a shop or a set of shops.

My formula is:

([Comparison].[Comparison].&[AVG]) =
CASE
    WHEN IsEmpty([Comparison].[Comparison].&[CAT])
        THEN NULL
    WHEN [Measures].CurrentMember IS [Measures].[Nr shops]
        THEN ([Comparison].[Comparison].&[CAT], [Measures].CurrentMember)
    WHEN IsLeaf([Shop].[Shop].CurrentMember)
        THEN ([Shop].[Shop].CurrentMember, [Comparison].[Comparison].&[CAT], [Measures].CurrentMember)
    ELSE
        // the ELSE keyword was missing before this SUM
        Sum(
            EXISTING(NonEmpty([Shop].[Shop].[Shop].MEMBERS)),
            ([Comparison].[Comparison].&[CAT], [Measures].CurrentMember)
                * ([Comparison].[Comparison].&[CAT], [Measures].[Working days])
        )
        / ([Comparison].[Comparison].&[CAT], [Measures].[Working days])
END

Can you help me?

Thanks in advance

Unique identifier column in SSAS tabular model direct query mode creating huge performance issue.


Hi All,

I am working on creating an SSAS tabular model using DirectQuery mode. In our database structure, all primary key columns are uniqueidentifier. When we import this structure into a DirectQuery tabular model, the columns are imported with the Text data type; however, when I slice and dice, I observe that the SQL queries the tabular engine generates perform an explicit conversion to nvarchar(max) for these uniqueidentifier columns, which causes a huge performance issue.

Can somebody help me tackle this? Do we have any functionality in the SSAS tabular model, or any workaround you can suggest, to fix this performance issue?

Is there any workaround at the SSAS tabular model level, apart from explicitly converting the column at the database level, to handle this conversion at runtime?

Please do share your ideas, it will help me a lot.
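If nothing at the model level helps, one common approach is to do the conversion once in the source, so the DirectQuery-generated SQL compares a fixed-width string instead of casting to nvarchar(max) on every query. A sketch with hypothetical table and column names:

```sql
-- Expose the GUID key as char(36) so joins and filters compare
-- like-for-like fixed-width strings rather than nvarchar(max) casts
CREATE VIEW dbo.Customer_ForTabular
AS
SELECT
    CONVERT(char(36), c.CustomerId) AS CustomerKey,
    c.CustomerName
FROM dbo.Customer AS c;
```

A persisted computed column with the same conversion, plus an index on it, can keep these comparisons sargable on larger tables.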


Thanks, Shishupal

Infinite recursion detected... when using ParallelPeriod.


I have a requirement to perform a calculation over the last 4 months of data, and I need to perform a check on each month individually. The challenging part is that our Time dimension includes an additional "ADJustment" member at the month level as the final end-of-year member, so the hierarchy and members look something like this:

[2019]
  [2019.Q3]
    [2019.JAN]
    [2019.FEB]
    [2019.MAR]
  [2019.Q4]
    [2019.APR]
    [2019.MAY]
    [2019.JUN]
    [2019.ADJ]
[2020]
  [2020.Q1]
    [2019.JUL]
    [2019.AUG]
    [2019.SEP]
...

For closing off the FY, some measures use the [YYYY.JUN] member and others use the [YYYY.ADJ] member as it includes data updated once the year is closed off.

I need to perform calculations at the month level and dynamically look at the value for the last 4 months; however, I need to handle the crossover of FYs and use [YYYY.JUN] instead of [YYYY.ADJ]. What would be the best way of doing this?

I tried the following 2 measures:

MEMBER [Measures].[Sales LM1] AS
    // Referencing an explicit measure in each tuple avoids the implicit
    // self-reference that triggers "Infinite recursion detected".
    IIF(
        PARALLELPERIOD([Time].[H1].[MONTH], 2, [Time].[H1].[2019.AUG])
            IS PARALLELPERIOD([Time].[H1].[MONTH], 2, [Time].[H1].[2019.AUG]).PARENT.PARENT.LASTCHILD.LASTCHILD,
        // myMeasure back 3 months to avoid ADJ
        ([Measures].[myMeasure], PARALLELPERIOD([Time].[H1].[MONTH], 3, STRTOMEMBER('[Time].[H1].[2019.AUG]'))),
        // myMeasure back 2 months as per normal
        ([Measures].[myMeasure], PARALLELPERIOD([Time].[H1].[MONTH], 2, STRTOMEMBER('[Time].[H1].[2019.AUG]')))
    )

MEMBER [Measures].[Sales LM2] AS
    IIF(
        PARALLELPERIOD([Time].[H1].[MONTH], 3, [Time].[H1].[2019.AUG])
            IS PARALLELPERIOD([Time].[H1].[MONTH], 3, [Time].[H1].[2019.AUG]).PARENT.PARENT.LASTCHILD.LASTCHILD,
        // myMeasure back 4 months to avoid ADJ
        ([Measures].[myMeasure], PARALLELPERIOD([Time].[H1].[MONTH], 4, STRTOMEMBER('[Time].[H1].[2019.AUG]'))),
        // myMeasure back 3 months as per normal
        ([Measures].[myMeasure], PARALLELPERIOD([Time].[H1].[MONTH], 3, STRTOMEMBER('[Time].[H1].[2019.AUG]')))
    )
The first one, "Sales LM1", works (so far) and skips the "ADJ" month to return "JUN" values. However, "Sales LM2" gives the error: "Infinite recursion detected. The loop of dependencies is Sales LM2 -> Sales LM2."

Can someone help explain what is going on, and suggest a better way to solve this?

Thanks, being Friday afternoon here I'm done until Monday, so will check then.


