Channel: SQL Server Analysis Services forum

Data masking for dimension attribute based on user in SSAS 2014 multidimensional


Hi there,

I am trying to implement data masking based on the user's login and am not sure why it is not working. I have the dimensions DimBrand, DimProduct and DimUser, plus a fact table FactProduct. The goal is that every BrandCode still appears in the report, but the code is masked with 'XXXX' when the user does not belong to that brand's group. In the cube I created all three dimensions and the fact table. I also created a new dimension, DimBrandMask, which holds the brand codes and relates back to the actual DimBrand dimension; in the cube it is joined to the measure group through a referenced relationship. Finally, I created a role with read access.

In the Dimension Data tab of the role I put the MDX below in the Allowed member set:

NonEmpty(
    [DimBrandMask].[Brand Code].Members,
    ( StrToMember("[DimUser].[Login Name].[Login Name].[" + UserName() + "]"),
      [Measures].[Dim User Count] )
)

And in the Denied member set I put the MDX below:

IIF(
    ( StrToMember("[DimUser].[Login Name].[Login Name].[" + UserName() + "]"),
      [DimUser].[Access Right].&[False] ),
    NonEmpty(
        [DimBrandMask].[Brand Code].Members,
        ( StrToMember("[DimUser].[Login Name].[Login Name].[" + UserName() + "]"),
          [DimUser].[Access Right].&[False],
          [Measures].[Dim User Count] )
    ),
    {}
)

Note: I created a measure group from the DimUser table, and the measure [Dim User Count] is used in the queries above. I am just trying to figure out what is going wrong here.
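
For comparison, a common shape for a denied member set keyed off the current user is to deny every brand the user has no mapping for; a minimal sketch, assuming the attribute and measure names from this post:

Except(
    [DimBrandMask].[Brand Code].[Brand Code].Members,
    NonEmpty(
        [DimBrandMask].[Brand Code].[Brand Code].Members,
        ( StrToMember("[DimUser].[Login Name].[Login Name].[" + UserName() + "]"),
          [Measures].[Dim User Count] )
    )
)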

I am expecting a result like this:

Brand     BrandCode    Count
Brand1    b1               6
Brand2    XXXXX            5
Brand3    XXXXX           10



Thanks in advance!

Palash


DAX ROWNUMBER Measure


Hello,

I was trying to build a measure that behaves the same as the ROW_NUMBER function...

Basically I need a non-repeating, increasing number based on a certain sort order of columns.

Is this possible to achieve with DAX? I tried RANKX, but it does not work when I have repeating values in my ranking column...
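
One common workaround is to compute the row number in a calculated column and break ties with a unique key; a minimal sketch, assuming a hypothetical table 'Sales' with a sort column [SortCol] and a unique key [ID] (a measure version would need the same tie-break inside RANKX):

RowNumber =
COUNTROWS (
    FILTER (
        'Sales',
        -- count the rows that sort at or before the current row;
        -- ties on [SortCol] are split by the unique [ID]
        'Sales'[SortCol] < EARLIER ( 'Sales'[SortCol] )
            || ( 'Sales'[SortCol] = EARLIER ( 'Sales'[SortCol] )
                && 'Sales'[ID] <= EARLIER ( 'Sales'[ID] ) )
    )
)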

Thank you very much.


Luis Simões

Sets without aggregations after crossjoining and implicit Aggregate


Hi,

Is it possible to create a set containing calculated members that will be aggregated later, after cross joining them with other calculated members, without explicitly using Aggregate on one of them? I'll give an example.

Let's say I have three dimensions:

products
-> products.hierarchy.product1
-> products.hierarchy.product2
-> products.hierarchy.product3

region
-> region.hierarchy.northern hemisphere
-> region.hierarchy.south hemisphere
-> region.hierarchy.outer space

season
-> season.hierarchy.summer
-> season.hierarchy.winter
-> season.hierarchy.nuclear winter


Set 1: {
products.hierarchy.product1
,(
products.hierarchy.product2
, products.hierarchy.product3
) as otherProducts
}

Set 2: {
region.hierarchy.northern hemisphere
,(
region.hierarchy.south hemisphere,
region.hierarchy.outer space
) as restOfSpace
}

and then, in Excel, print something like this:

            | Set1
------------+----------------
Set2*Season | Some Measure X

which would result in:

                                     | product1                       | otherProducts
-------------------------------------+--------------------------------+--------------------------------
northern hemisphere x summer         | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
northern hemisphere x winter         | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
northern hemisphere x nuclear winter | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
restOfSpace x summer                 | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
restOfSpace x winter                 | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
restOfSpace x nuclear winter         | aggregate(existing(measure.X)) | aggregate(existing(measure.X))
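
For contrast, the explicit form this question hopes to avoid would be a calculated member per group, roughly like this sketch (names taken from the example above):

WITH MEMBER [products].[hierarchy].[otherProducts] AS
    Aggregate( { [products].[hierarchy].[product2],
                 [products].[hierarchy].[product3] } )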


Rafal

MDX - Get maximum value of dimension key column


Hello,

A dimension (DimProduct) has one attribute whose NameColumn refers to ProductDescription and whose KeyColumn refers to the ProductId column.

How can I get the maximum value of the dimension's key column using MDX?
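
A minimal sketch of one way to do it, assuming the attribute hierarchy is named [Product] and the cube is [YourCube] (both names are assumptions); Properties("KEY", TYPED) reads each member's key, i.e. the ProductId:

WITH MEMBER [Measures].[Max Product Id] AS
    Max(
        [DimProduct].[Product].[Product].Members,
        -- read each member's key (the ProductId) as a typed value
        [DimProduct].[Product].CurrentMember.Properties("KEY", TYPED)
    )
SELECT [Measures].[Max Product Id] ON COLUMNS
FROM [YourCube]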

Thanks



MDX display all dimension records


Hi,

If I have a dimension with 3 records like this:

BrokerDimId  BrokerNm
1            Broker1
2            Broker2
3            Broker3

and I have a fact table with 2 rows that join back to the first 2 dimension rows, like this:

BrokerDimId  Amt
1            $100
2            $200

I want to write the MDX so that I get all of the dimension rows, even when there is no fact record for a row, defaulting the amount to 0, like this:

BrokerDimId  Amt
1            $100
2            $200
3            $0

Is this possible using MDX?
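
A minimal sketch, assuming a [Broker] dimension with a [Broker Nm] attribute and a cube named [YourCube] (names are assumptions); CoalesceEmpty supplies the 0 default, and the rows are deliberately not wrapped in NON EMPTY:

WITH MEMBER [Measures].[Amt Or Zero] AS
    -- turn missing fact values into 0 so every broker row survives
    CoalesceEmpty( [Measures].[Amt], 0 )
SELECT [Measures].[Amt Or Zero] ON COLUMNS,
       [Broker].[Broker Nm].[Broker Nm].Members ON ROWS
FROM [YourCube]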

thanks

Scott



Errors in the OLAP storage engine: The attribute key cannot be found when processing


Hello, 

I know this is mainly a design problem. I've read that there is a workaround by customising errors at processing time, but I am not happy having to ignore errors; also, the cube processing is scheduled, so ignoring errors is not a good choice.

This is the part of my cube where the error is thrown.

DimTime

  • PK (int)
  • MyMonth (int, Example = 201501, 201502, 201503, etc.) 
  • Other columns

FactBudget

  • PK (int)
  • Month (int, Example = 201501, 201502, 201503, etc.)

I set the relation between DimTime and FactBudget using DimTime.MyMonth as the primary key and FactBudget.Month as the foreign key.

The cube built without problems, but during processing the error "The attribute key cannot be found when processing" was thrown.

It was thrown because FactBudget has some Month values (201510, 201511 and 201512, for example) that DimTime does not, so referential integrity is broken.

My actual question: is there a way or pattern to redesign this DWH so that it deploys and processes correctly?
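
As a quick integrity check before redesigning, the orphaned keys can be listed on the relational side; a minimal sketch, assuming the table and column names from this post:

-- months present in FactBudget but missing from DimTime
SELECT DISTINCT f.[Month]
FROM dbo.FactBudget AS f
LEFT JOIN dbo.DimTime AS t
    ON t.MyMonth = f.[Month]
WHERE t.MyMonth IS NULL;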

Thanks for considering my question.

Performance issue with a large dimension in SSAS


Hi All,

I'm designing a report in Tableau with an SSAS cube as the data source, and I'm facing a performance issue when loading a large amount of dimension data. In my cube, a couple of dimensions have more than 400,000 (4 lakh) records and are bound to four different measures. I have set up partitions and aggregation designs on a yearly basis, but the page with the large dimension takes more than 20 seconds to bring back the data.

Can anyone suggest how to optimize the large dimension data?

Thanks and Regards

Baskaran R

Tabular Model -- Optimisation


Hi

I have a Tabular model with 8 tables; 7 of the tables process in 3 minutes, but the main table is 22.3 GB in size.

I am processing the table on a 64-bit PC with 8 GB of RAM, to the local instance, from a production server.

At this rate it will take days to process the 22.3 GB main table. The main table was running off a view, but I have inserted the data into TableA and updated the model.

The only other optimisation I can think of is copying TableA to my local machine, but network performance to the server is not the issue.

 

Any suggestions to improve performance?

I have been processing the main table for the last 4 hours and it is only at 11.7 million records out of 124 million in total; at this rate it will take 44 hours.

Size: 22,342,752 KB

CREATE TABLE [dbo].[StagingTablevwKeyEventsfact](
    [PKID] [bigint] NULL,
    [ManufactureGeoLocationCode] [nvarchar](14) NULL,
    [OEMExtendedID] [nvarchar](16) NULL,
    [ProgramEligibilityValueDescr] [nvarchar](41) NULL,
    [ChannelRelationshipID] [nvarchar](32) NULL,
    [CountryID] [int] NULL,
    [SoldToPCModelNumberKey] [int] NULL,
    [ActivationStatusSK] [smallint] NOT NULL,
    [SoldToSalesGeographyKey] [int] NULL,
    [LicensableProductKey] [int] NULL,
    [FulfillmentDateKey] [bigint] NULL,
    [ReturnDateKey] [bigint] NULL,
    [BoundDateKey] [bigint] NULL,
    [CBRReceivedDate] [bigint] NULL,
    [ActivationDate] [int] NULL,
    [Total Fulfilled1] [int] NOT NULL,
    [Fulfilled Returned1] [int] NOT NULL,
    [Reported On CBR1] [int] NOT NULL,
    [Bound1] [int] NOT NULL,
    [Bound Returned1] [int] NOT NULL,
    [Blocked Before Activation Enabled1] [int] NOT NULL,
    [Activation Enabled1] [int] NOT NULL,
    [Activation Enabled Via Overide Exception1] [int] NOT NULL,
    [Activation Success1] [int] NOT NULL,
    [Activation Failures1] [int] NOT NULL,
    [Activation Outstanding1] [int] NOT NULL,
    [Returned Post Activation1] [int] NOT NULL,
    [Blocked Post Activation1] [int] NOT NULL,
    [Total Returned1] [int] NOT NULL,
    [Total Blocked1] [int] NOT NULL,
    [Days Fulfilled To Bound1] [bigint] NULL,
    [Days Bound To Activated1] [int] NULL
) ON [PRIMARY]
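
If network speed really isn't the bottleneck, one option often tried is partitioning the big table in the model so it processes in slices; each partition's source query would look roughly like the sketch below (assuming [FulfillmentDateKey] encodes dates as yyyymmdd, which is a guess):

-- hypothetical partition query: one yearly slice of the staging table
SELECT *
FROM dbo.StagingTablevwKeyEventsfact
WHERE [FulfillmentDateKey] BETWEEN 20140101 AND 20141231;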

 

 

 


Making DATEADD return EXACTLY one month before (calculation at 29/3, 30/3, 31/3 returns the value at 28/2)

Hi,

I've developed this calculation for a model where 'Dim Posting Date' has a year|month|date hierarchy. It allows me to:

1) if a month is selected and no date is selected, do the calculation for the month;

2) if a month is selected AND dates are selected, do the calculation per day (already changed to allow non-contiguous dates);

3) finally, if only the year is selected, return BLANK.

But now I have a problem with 2): on 29, 30 and 31 March I get the (last) value, that of 28 February. I don't want that. How can I change the calculation so it returns exactly the value for one month before, and blank when that day doesn't exist?

(The "opposite" works without problems: on 28 February I get the value of 28 January.)

Sales Amount M-1 :=
IF (
    HASONEVALUE ( 'Dim Posting Date'[Month] )
        && NOT ISFILTERED ( 'Dim Posting Date'[Date] );
    CALCULATE (
        [Sales Amount];
        DATESMTD ( DATEADD ( 'Dim Posting Date'[Date]; -1; MONTH ) )
    );
    IF (
        HASONEVALUE ( 'Dim Posting Date'[Month] )
            && ISFILTERED ( 'Dim Posting Date'[Date] );
        SUMX (
            VALUES ( 'Dim Posting Date'[Date] );
            CALCULATE (
                [Sales Amount];
                DATEADD ( 'Dim Posting Date'[Date]; -1; MONTH )
            )
        );
        BLANK ()
    )
)
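
One possible shape for the day-level branch is to drop the dates whose exact day does not exist one month back, so those rows return blank; a minimal sketch using the same column names (EDATE clamps to the end of the shorter month, which is what the day comparison detects):

SUMX (
    FILTER (
        VALUES ( 'Dim Posting Date'[Date] );
        -- keep a date only if the same day number exists one month earlier
        DAY ( 'Dim Posting Date'[Date] )
            = DAY ( EDATE ( 'Dim Posting Date'[Date]; -1 ) )
    );
    CALCULATE ( [Sales Amount]; DATEADD ( 'Dim Posting Date'[Date]; -1; MONTH ) )
)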

Analysis Services job step: The file could not be decrypted.

We have a daily SQL Server Agent job that's failing intermittently. Between May 27th and June 16th it has failed twice. The error message we receive (listed below) references "The file could not be decrypted." The step, however, is an Analysis Services step that builds a cube; the message seems to have no relevance in this context.

We've been unable to locate any online references for this message that speak to anything other than BizTalk or connecting to Excel spreadsheets, neither of which is in play here.

Message:
Executed as user: CORPORATE\sqlagent. Microsoft.AnalysisServices.Xmla.XmlaException: Internal error: The operation terminated unsuccessfully.
The following system error occurred: The specified file could not be decrypted.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Data Warehouse', Name of 'Data Warehouse'.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Time', Name of 'Day' was being processed.
Errors in the OLAP storage engine: An error occurred while the 'Calendar Month' attribute of the 'Day' dimension from the 'CustomerExperienceCenter' database was being processed.
Server: The current operation was cancelled because another operation in the transaction failed.
   at Microsoft.AnalysisServices.Xmla.XmlaClient.CheckForSoapFault(XmlReader reader, XmlaResult xmlaResult, Boolean throwIfError)
   at Microsoft.AnalysisServices.Xmla.XmlaClient.CheckForError(XmlReader reader, XmlaResult xmlaResult, Boolean throwIfError)
   at Microsoft.AnalysisServices.Xmla.XmlaClient.SendMessage(Boolean endReceivalIfException, Boolean readSession, Boolean readNamespaceCompatibility)
   at Microsoft.AnalysisServices.Xmla.XmlaClient.SendMessageAndReturnResult(String& result, Boolean skipResult)
   at Microsoft.AnalysisServices.Xmla.XmlaClient.Execute(String command, String properties, String& result, Boolean skipResult, Boolean propertiesXmlIsComplete)
   at Microsoft.SqlServer.Management.Smo.Olap.SoapClient.ExecuteStatement(String stmt, StatementType stmtType, Boolean withResults, String properties, String parameters, Boolean restrictionListElement, String discoverType, String catalog)
   at Microsoft.SqlServer.Management.Smo.Olap.SoapClient.SendCommand(String command, Boolean withResults, String properties)
   at OlapEvent(SCH_STEP* pStep, SUBSYSTEM* pSubSystem, SUBSYSTEMPARAMS* pSubSystemParams, Boolean fQueryFlag).
NOTE: The step was retried the requested number of times (1) without succeeding. The step failed.

Masking the dimension data in SSAS based on role


Hi..

The requirement is to mask dimension data as "N/A" for certain users in an SSAS cube, based on role. For example, I have Employee data with EmployeeDateOfBirth, EmployeeName and EmployeeLocation in the Employee dimension.

A normal user should see this:

EmployeeDateOfBirth   EmployeeName   EmployeeLocation
01/01/2015            Employee1      UK
02/02/2015            Employee2      USA

A restricted user should see results like this (in the cube, Excel and SSRS), as per the role:

EmployeeDateOfBirth   EmployeeName   EmployeeLocation
01/01/2015            N/A            N/A
02/02/2015            N/A            N/A

We have to achieve this in the cube, Excel and SSRS reports. Please kindly help with this.
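
For reference, one design sometimes used here is to carry a masked copy of each attribute in the source view and let role security steer restricted users to the masked columns; the relational half of that idea might look like the sketch below (view and table names are hypothetical, and the role wiring still has to be done in the cube):

-- hypothetical source view: exposes both real and masked attribute columns
CREATE VIEW dbo.vwDimEmployee AS
SELECT Employeedateofbirth,
       EmployeeName,
       EmployeeLocation,
       'N/A' AS EmployeeNameMasked,
       'N/A' AS EmployeeLocationMasked
FROM dbo.DimEmployee;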

Thank you..

SQL Server Tabular and Power BI in Azure?


Hi,

Summary: This post is about getting a complete Microsoft DW/BI (Tabular), Power BI and Office environment up and running. I am a (Microsoft) DW/BI manager and want to learn about the latest Microsoft DW/BI (SQL Server Tabular) and Power BI (Power Pivot, Power View, ...).

To save time and to have access from anywhere, I thought I could use Windows Azure, but I quickly got lost in the Azure preview and marketplace! I even contacted the Windows Azure team, and that didn't clarify things.

So, has anyone managed to get a SQL Server (2014) environment with all the DW/BI technologies (SSIS, SSAS, SSRS, Data Mining) together with Power BI and Excel working? If so, can you please point me in the right direction to get started?

If this is not possible on Azure and it has to be done on Windows, what do I need to install for the Power BI stack (to work with Tabular)? I already have SQL Server 2014 and Office Pro Plus installed natively on my Windows machine.

Thanks in advance!


Having trouble learning Time Intelligence in SSAS 2012


Hi All,

I'm studying for exam 70-466 and need to learn the ins and outs of adding Time Intelligence, but I can't seem to find a decent tutorial. They all seem to skip from "run the Add BI Wizard" straight to "...and here's what it looks like when you're viewing it in your cube browser." Although it seems like a straightforward process, neither Excel nor SSDT shows me where these columns came from. In other words, I've created Year to Date, Month to Date and Year Over Year Growth calculations, put them in the "Date \ Calendar Date" multi-level hierarchy, and made them available to my "Internet Sales-Sales Amount" measure. Once I save, build, redeploy and process (whew!) the cube, I still don't see "Year to Date" as a field in the field chooser.

I do see "Year to Date", "Month to Date" and "Year Over Year Growth" in the Script Organizer on the Calculations tab, and I see the parent hierarchy is "Date.Calendar Date Date Calculations 1", but I've either done something wrong in the creation or am misunderstanding some part of the instructions, because I don't see it in the cube browser. Could somebody please provide a URL for a tutorial that would help? I don't see that MSDN or TechNet has any useful details for me.

Thanks,
Eric B.

Invoke-ASCmd : The path is not of a legal form.


I am trying to pass parameters to a ps1 file and got an "Invoke-ASCmd : The path is not of a legal form." error. Are you familiar with specifying UNC paths for the parameters? If I use UNC paths without any prefix, it fails. Should the UNC path be prefixed with filesystem:: or something else for the parameters?

Below is a mockup of the actual command on a scheduler:

powershell -file \\BatchServer\Batch\ARCube\ProcessingScripts\RunXmla.ps1
    -XmlaFile \\BatchServer\Batch\ARCube\ProcessingScripts\ProcessFull.xmla
    -SsasServer ASServer
    -SsasTraceFile filesystem::\\ASServer\Logs\AR\log.txt


--

Inside the ps1 file, the parameters are handled like this:

param(
    [string]$XmlaFile,
    [string]$SsasServer,
    [string]$SsasTraceFile
);

Invoke-ASCmd -InputFile $XmlaFile -Server $SsasServer -TraceFile $SsasTraceFile

# fail fast if the trace file was never written
if (!(Test-Path $SsasTraceFile)) {
    Write-Host "Could not find trace file '$SsasTraceFile', cannot determine if processing was successful, failing..."
    ExitWithCode 2   # ExitWithCode is a helper defined elsewhere in the script
}

# read the whole trace file and fail if any processing error was logged
$traceFileContent = [io.file]::ReadAllText($SsasTraceFile)
if ($traceFileContent -match "ProgressReportError") {
    ExitWithCode 1
}
else {
    ExitWithCode 0
}




SSAS Tabular - analysing who accesses the tabular cubes


Hi,

Is it possible to analyse/get a list of the users who access the tabular cube? Ideally "who", "when" and "what"?
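
For currently connected users, the SSAS dynamic management views give a quick "who/when/what"; a minimal sketch, run as a query against the Tabular instance:

-- lists open sessions: user, start time, and the last command issued
SELECT SESSION_USER_NAME, SESSION_START_TIME, SESSION_LAST_COMMAND
FROM $SYSTEM.DISCOVER_SESSIONS

For a durable history this would need to be captured continuously (for example with a trace or Extended Events), since the DMV only shows live sessions.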

Best Regards


What are the size limits of file *.agg.flex.data?


What are the size limits of the *.agg.flex.data files? These files are typically located in the SSAS data directory.

While processing the cubes with "Process Index", I am getting the error message below:

File system error: The following file is corrupted: Physical file: \\?\F:\OLAP\<DB_Name>.0.db\<Cube_Name>.0.cub\<Name>.0.det\<Partition_Name>.0.prt\33.agg.flex.data. Logical file.

However, if I navigate to the location mentioned in the error message, the specified file is not present at that location.

I have referred to a similar query regarding a corrupted aggregation file on ProcessIndex,

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/2c1f677a-60c5-4a5f-99ec-daaffd564ecb/corrupted-aggregation-file-on-processindex

but cannot implement the proposed solution, as I no longer have the associated fact tables (source data) needed to perform a "Process Data" operation.

I have checked the sizes of several agg.flex.data files; they vary from ~200 MB to ~5 GB. Is there a size-limit configuration associated with these files? If yes, can we change it in SQL Server 2008 R2? Or is there an update (provided by Microsoft) that takes care of such configurations?

Execution environment: Windows Server 2008 R2 Enterprise, SQL Server 2008 R2 + SP1, 64 GB RAM.

If anyone has faced such an issue before, please help. Any help would be highly appreciated.

The FISCAL_YR hierarchy already appears in the Axis1 axis.


Hi,

I have a problem with the query below in the query designer.

WITH
MEMBER [Q1 Metric] AS
    IIF(LEFT([Fiscal Year Quarter].CurrentMember.MemberValue, 2) = "Q1",
        [Measures].[Actual Revenue Asia], NULL),
    FORMAT_STRING = "Standard", VISIBLE = 1
MEMBER [Q2 Metric] AS
    IIF(LEFT([Fiscal Year Quarter].CurrentMember.MemberValue, 2) = "Q2",
        [Measures].[Actual Revenue Asia], NULL),
    FORMAT_STRING = "Standard", VISIBLE = 1
MEMBER [Q3 Metric] AS
    IIF(LEFT([Fiscal Year Quarter].CurrentMember.MemberValue, 2) = "Q3",
        [Measures].[Actual Revenue Asia], NULL),
    FORMAT_STRING = "Standard", VISIBLE = 1
MEMBER [Q4 Metric] AS
    IIF(LEFT([Fiscal Year Quarter].CurrentMember.MemberValue, 2) = "Q4",
        [Measures].[Actual Revenue Asia], NULL),
    FORMAT_STRING = "Standard", VISIBLE = 1
SELECT
    { [Measures].[Actual Revenue Asia], [Q1 Metric], [Q2 Metric], [Q3 Metric], [Q4 Metric] } ON COLUMNS,
    NON EMPTY { ( [JBE_D_CALENDAR].[FISCAL_YR].MEMBERS,
                  [JBE_D_CALENDAR].[Fiscal Year Quarter].MEMBERS ) } ON ROWS
FROM [Model]
WHERE (
    { STRTOSET(@N_3YearActualScenario, CONSTRAINED),
      STRTOSET(@N_2YearActualScenario, CONSTRAINED),
      STRTOSET(@N_1YearActualScenario, CONSTRAINED),
      STRTOSET(@ScenarioforAsofFiscalYear, CONSTRAINED) },
    STRTOSET(@REGION, CONSTRAINED),
    STRTOSET(@COUNTRY, CONSTRAINED),
    STRTOSET(@ENTITY, CONSTRAINED),
    STRTOSET(@LOBGRP, CONSTRAINED),
    STRTOSET(@SUBLOBGRP, CONSTRAINED),
    STRTOSET(@LOBOwners, CONSTRAINED),
    { STRTOSET(@FISCALYR, CONSTRAINED),
      STRTOSET(@BeginPeriod, CONSTRAINED) }
)
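
This error usually means the [FISCAL_YR] hierarchy is referenced both on the ROWS axis and in the WHERE clause. One common workaround (a sketch, keeping the parameter names above) is to move that filter into a subselect so the hierarchy appears in only one place:

FROM (
    SELECT STRTOSET(@FISCALYR, CONSTRAINED) ON COLUMNS
    FROM [Model]
)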

A dimension vs. measure group relation


Dear All,

I am describing a scenario with an example of measure and dimension groups.

Example: I have created a measure group for sales. The total sales amount it shows is $10,000 if I don't apply any filter on location (this amount is collected from 10 different locations).

Now I have created a dimension with only 5 locations, where Location is the key column, and set up a 'Regular' relationship between the measure group and the dimension on Location in 'Dimension Usage'.

When I browse the cube without any filter on location, I expect the measure to show sales of $10,000, but it displays the sales of the 5 locations only.

Kindly let me know whether this is expected behavior, and what I should do to show the sales for all 10 locations when no location filter is applied.


Regards Suneel

Dynamic MDX query for YearToDate total


Hi

I have created a cube with 1 fact table and a few dimensions, including DimDate.

I need to create a calculated member for a variance:

Variance = SUM([Measures].[Amt]) from the financial year beginning to the current date (2015-04-01 to today)
         - SUM([Measures].[Amt]) for the same period last year (2014-04-01 to the current date last year)

I hope that makes sense.

How can I create this calculated member?
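
A minimal sketch of one way to express it, assuming a [DimDate].[Fiscal] hierarchy with a [Fiscal Year] level and a cube named [YourCube] (all names are assumptions):

WITH
MEMBER [Measures].[Fiscal YTD Amt] AS
    -- accumulate from the start of the fiscal year to the current member
    Aggregate(
        PeriodsToDate( [DimDate].[Fiscal].[Fiscal Year],
                       [DimDate].[Fiscal].CurrentMember ),
        [Measures].[Amt] )
MEMBER [Measures].[Fiscal YTD Amt LY] AS
    -- the same accumulation, shifted back one fiscal year
    ( [Measures].[Fiscal YTD Amt],
      ParallelPeriod( [DimDate].[Fiscal].[Fiscal Year], 1,
                      [DimDate].[Fiscal].CurrentMember ) )
MEMBER [Measures].[Variance] AS
    [Measures].[Fiscal YTD Amt] - [Measures].[Fiscal YTD Amt LY]
SELECT [Measures].[Variance] ON COLUMNS
FROM [YourCube]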

YTD, MTD functions return empty values, probably due to old test data


Hello,

I have managed to use the BI Wizard for time intelligence and added YTD and MTD successfully. I notice the values returned are empty, and I think this is because all the test data I use is many years old. What's the simplest way to resolve this issue so that I can see that these MDX functions return correct values? Changing the system date on this company laptop is not an option.
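
One way to check the calculations independently of the system clock is to query a plain YTD pinned to the old data; a minimal sketch, assuming an Adventure Works-style [Date].[Calendar] hierarchy and cube name (all names are assumptions):

WITH MEMBER [Measures].[Sales YTD] AS
    -- accumulate the measure from the start of the selected year
    Aggregate(
        PeriodsToDate( [Date].[Calendar].[Calendar Year],
                       [Date].[Calendar].CurrentMember ),
        [Measures].[Internet Sales-Sales Amount] )
SELECT [Measures].[Sales YTD] ON COLUMNS,
       [Date].[Calendar].[Month].Members ON ROWS
FROM [Adventure Works]

If this returns values for the old months while the wizard's YTD stays empty, the problem is in the generated calculations rather than the age of the data.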

Thanks,
Eric B.


