Channel: SQL Server Analysis Services forum

Process Full for only today's edited data


Hello. I have a large data warehouse.

Every hour the end users edit the data and then check the BI system again.

The BI system is based on pivot tables connected to an SSAS Tabular cube.

To get the latest data I have to run a Process Full on the cube, which takes a lot of resources.

First I tried Process Default. It's fast, but it seems useless; it doesn't update anything.

Then I had another idea: a Round column in the fact table, with a partition for each round, processing the last round full via SSIS. But it isn't fully automatic, since for every round I have to create a new partition, and the round data is also huge and somewhat slow.

The last thing I came up with is an Edit Date column in the fact table, with a partition for today's data:

SELECT [dbo].[PERSONS].* FROM [dbo].[PERSONS] where CAST([DATE] AS DATE)=CAST(GETDATE() AS DATE)

and a partition for the other dates' data:

SELECT [dbo].[PERSONS].* FROM [dbo].[PERSONS] where CAST([DATE] AS DATE)<>CAST(GETDATE() AS DATE)

Then I process the (Today data) partition full via SSIS, but as soon as a row's date is no longer today, the data reverts to what it was before the process.

Does anyone have an idea how to process only the last edited and inserted rows, without a Process Full that deletes all the data and loads it again?


Help on showing a measure on a time axis, without summing


Hi everyone,

I have a tool where I can set up most things with a GUI, and it *should* not be that hard to use. But for a newbie in the BI world it is still way too hard for me. =)

I have a table like this:

Name    Employment    Modified Date    Modified Time
Hans    100           13.02.19         12:00:00
Hans    80            15.03.19         14:00:00
Fritz   100           14.02.19         13:00:00

And I need to make two reports: one showing just the current values, and one showing values over a time axis.

Pivot Table 1 (don't sum, but show the last entry by date per employee):

Name    Employment
Hans    80
Fritz   100

Pivot Table 2 (don't sum, but show the last entry per employee for each member of the date dimension):

Name    Feb 19    Mar 19
Hans    100       80
Fritz   100       100

I tried setting the measure "Employment" to "SUM" and to "Last Child", but SUM just doesn't get me where I need to be, and "Last Child" removed the whole pivot table, so not even the names were left.

I'm also able to do some basic MDX, but I think this goes too far for me. Any hint appreciated!
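If the model is Tabular, the usual replacement for a "Last Child"-style aggregation is a DAX "last non-blank" measure. Here is a minimal sketch, assuming the data sits in a table called Staff (Name, Employment, Modified Date) related to a separate Date table on Modified Date; the table and column names are placeholders, and ties within a single day (the Modified Time column) are not handled:

Last Employment :=
VAR MaxVisibleDate = MAX ( 'Date'[Date] )          // end of the period currently shown
VAR LastDateWithData =
    CALCULATE (
        LASTNONBLANK ( 'Date'[Date], CALCULATE ( SUM ( Staff[Employment] ) ) ),
        FILTER ( ALL ( 'Date'[Date] ), 'Date'[Date] <= MaxVisibleDate )
    )                                               // last date up to the period end with a record
RETURN
    CALCULATE (
        SUM ( Staff[Employment] ),
        LastDateWithData,
        ALL ( 'Date' )                              // read the value at that date, ignoring the period filter
    )

With Name on the rows this gives the first pivot (Hans 80, Fritz 100); adding months from the Date table to the columns gives the second, because the last known value is carried forward to each period's end.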

MDX - Exclude Values from a calculation


Hi Everyone:

I would like to exclude a few members of different attributes of the Item dimension from a calculation. Any ideas?

SUM (
    -{ [Item].[Prod Family].&[6.E1]&[028],
       [Item].[Prod Family].&[6.E1]&[020],
       [Item].[Main Prod Grp].&[6.E1]&[E38],
       [Item].[Prod Group].&[6.E1]&[H3B],
       [Item].[Prod Group].&[6.E1]&[H3C],
       [Item].[Prod Group].&[6.E1]&[H97],
       [Item].[Prod Group].&[6.E1]&[H56],
       [Item].[Prod Group].&[6.E1]&[H39],
       [Item].[Prod Group].&[6.E1]&[H49],
       [Item].[Prod Group].&[6.E1]&[H59],
       [Item].[Prod Group].&[6.E1]&[H5A],
       [Item].[Prod Group].&[6.E1]&[H5B] },
    [Measures].[Sales]
)



Deploy tabular project with SSDT 15.9.9 with compatibility level 1400. Error returned: 'Unexpected column name'


Hello,

I cannot deploy a Tabular project with SSDT 15.9.9 at compatibility level 1400 (with 1200 it works) to Analysis Services 14.0.239.1. I get the following error:

============================
Error Message:
============================

Failed to save modifications to the server. Error returned: 'Unexpected column name: Received column 'ObjectID.Expression' in rowset 'Annotations'. Expected column 'ObjectID.Set'.
'.
----------------------------
Failed to save modifications to the server. Error returned: 'Unexpected column name: Received column 'ObjectID.Expression' in rowset 'Annotations'. Expected column 'ObjectID.Set'.
'.
----------------------------
An error occurred while opening the model. Click Details for more information.

============================
Call Stack:
============================

   at Microsoft.AnalysisServices.Tabular.Model.SaveChanges(SaveOptions saveOptions)
   at Microsoft.AnalysisServices.Tabular.Model.SaveChanges(SaveFlags saveFlags)
   at Microsoft.AnalysisServices.BackEnd.DataModelingServer.CreateCatalog(String databaseName, Int32 localeID, DirectQueryMode directQueryMode, Int32 clientCompatibilityLevel)
----------------------------
   at Microsoft.AnalysisServices.BackEnd.DataModelingServer.CreateCatalog(String databaseName, Int32 localeID, DirectQueryMode directQueryMode, Int32 clientCompatibilityLevel)
   at Microsoft.AnalysisServices.VSHost.VSHostManager.OnNewProjectPrepareSandbox(Boolean isRealTimeMode, Int32 clientCompatibilityLevel)
   at Microsoft.AnalysisServices.VSHost.VSHostManager.PrepareSandbox(Boolean newProject, Boolean& isRefreshNeeded, Boolean& isImpersonationChanged, Boolean& saveRequired, List`1& truncatedTables, Boolean isRealTimeMode, Int32 clientCompatibilityLevel)
----------------------------
   at Microsoft.AnalysisServices.VSHost.VSHostManager.PrepareSandbox(Boolean newProject, Boolean& isRefreshNeeded, Boolean& isImpersonationChanged, Boolean& saveRequired, List`1& truncatedTables, Boolean isRealTimeMode, Int32 clientCompatibilityLevel)
   at Microsoft.AnalysisServices.VSHost.Integration.EditorFactory.CreateEditorInstance(UInt32 grfCreateDoc, String pszMkDocument, String pszPhysicalView, IVsHierarchy pvHier, UInt32 itemid, IntPtr punkDocDataExisting, IntPtr& ppunkDocView, IntPtr& ppunkDocData, String& pbstrEditorCaption, Guid& pguidCmdUI, Int32& pgrfCDW)

============================

I have tested with a new project too. What's more, if I create a new project and set the workspace server to the SQL Server instance, I receive the same error.

With Tabular Editor, compatibility level 1400 works fine, and if I restore a database at compatibility level 1400 on the same Analysis Services server it also works, so I think the problem is in the SSDT IDE. Do you know the reason?

Thank you in advance.

Nicolás.

Copying values from one measure to another using scope


Hi,

Below is the scenario: I want to overwrite the value of Market Score with Over Score, if Over Score exists.


Dimension    Market Score    Over Score
q1           89
q2           87              81
q3           84

Here is the expected output.

Dimension    Market Score    Over Score
q1           89
q2           81              81
q3           84
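If this is a Multidimensional cube, the usual tool is a SCOPE assignment that overwrites Market Score wherever Over Score is not empty; if it is a Tabular model, the same idea is a small DAX measure. A minimal sketch, assuming existing base measures called [Market Score] and [Over Score] (placeholder names):

Adjusted Market Score :=
IF (
    ISBLANK ( [Over Score] ),   // no override available for this member
    [Market Score],             // keep the original market score
    [Over Score]                // otherwise take the override
)

For q2 in the example this returns 81 instead of 87, while q1 and q3 keep their original values.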

Thanks


How to track the daily growth of an SSAS cube database?


Hi,

How can I find the daily growth of the SSAS cube database file size?

Thanks


Need help - Report Actions not working for a cube in SQL Server 2016


Hi All,

Greetings. I need help getting Report Actions to work in an SSAS cube. When I applied the setting below and processed the cube, I was not able to see the Report Actions in the browser.


Sreekanth. Note: Please vote/mark the post as answered if it answers your question or helps to solve your problem.

MDX script editing error generated at each keystroke

I'm having the same issue in VS2012. When I edit an MDX script, the error window pops open between each keystroke. I have to copy the script into Notepad, edit it, and then paste it back into the MDX editor. Very annoying!

DATEDIFF - cannot return negatives in Tabular but can in PowerBI?


Using DATEDIFF in Power BI as a DAX calculation allows it to return negative numbers when the start date is AFTER the end date.

However, this same scenario in SSAS 2017 DAX returns the error:

"In DATEDIFF function, the start date cannot be greater than the end date"

Why is this the case? Is there an update for SSAS Tabular that brings it into parity with Power BI?
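Until the two engines behave the same way here, one common workaround is to order the arguments yourself and restore the sign afterwards. A minimal sketch, assuming a table T with StartDate and EndDate columns (placeholder names):

Signed Day Diff :=
VAR StartD = MIN ( T[StartDate] )
VAR EndD   = MIN ( T[EndDate] )
RETURN
    IF (
        StartD <= EndD,
        DATEDIFF ( StartD, EndD, DAY ),       // normal case
        - DATEDIFF ( EndD, StartD, DAY )      // swap the arguments and negate when start is after end
    )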

Thanks!

A duplicate attribute key has been found when processing

Hi

When I process one of my dimensions it fails and I get the following error:

Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: 'Customers', Column: 'DisplayName', Value: 'Stephen Grant'. The attribute is 'Display Name'.

I don't know if this is significant, but the attribute to which it is making reference was added through BIDS 2008 (the cube was originally created with BIDS 2005). 

There are no duplicates of 'Stephen Grant' in the DisplayName column. Not that it should matter if there were, as this attribute has a cardinality of Many, with a rigid attribute relationship directly to the dimension's key attribute. The Key column for the Display Name attribute simply refers back to the same (DisplayName) column in the table.

If I delete this record, or even just update the DisplayName field from 'Stephen Grant' to something else, the dimension processes just fine. I can't work out what it is about this record that is stopping the dimension from being able to process.

Can anyone help me figure out what's going on?

Julia.

P.S. I am using SSAS 2008 on Windows Server 2008

ssas tabular - what if different measures use different anchor dates?


Hi, we run SSAS 2016 Enterprise.

We will have over 100 metrics, i.e. calculated measures, in our project. All except one or two of them are calculated as the sum of one fact (factor) divided by the sum of another fact. The factors are separate columns in a fact table, each represented by a 0 or 1, and a factor can be used in more than one measure.

Essentially, all metrics make sense only in the context of a specific date. That date is the "anchor date" for that metric. Different metrics use different anchor dates, many use the same one, and they all "share" about 14 other dimensions.

It's important for us to be able to pivot even metrics with different anchor dates together; I'm providing an example below. Not only would a calculated measure need to know which date column to use, but the query would somehow need to pass the same date to all measures, to be applied over their respective anchor dates.

One of our peers believes the only way to do this in Tabular is to shape what he calls a "stack". One column in each stack record would be the calc measure name, two columns would contain the numerator and denominator, and one column would be the "anchor date". He also likes this idea because there is essentially one calculation for all measures: numerator / denominator.

My question is: is this the only way? Isn't there a more traditional approach? Sure, each calc would be less uniform in a more traditional approach, but we wouldn't be painting ourselves into a corner, and the moment a square root shows up on the scene we wouldn't be reshaping the data.

Example:

Calc1 = sum Factor A / sum Factor B.   Anchor date is ship date.
Calc2 = sum Factor X / sum Factor Y.   Anchor date is order date.

The easier question: for anchor date 1/1/2019, how can I get these two calcs to show up side by side in my query or Excel pivot by passing 1/1/2019 just once?

Maybe the tougher question: what if there are n dates that I want passed, expecting the dates to act as separate slicers?
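For what it's worth, the more traditional Tabular approach is role-playing dates: one shared Date table, one relationship per anchor-date column in the fact (only one active, the rest inactive), and each measure activates the relationship it needs. A minimal sketch, assuming a fact table called Fact with FactorA/FactorB/FactorX/FactorY flag columns and ShipDate/OrderDate keys (placeholder names), and a Date table with relationships to both date columns:

Calc1 :=
CALCULATE (
    DIVIDE ( SUM ( Fact[FactorA] ), SUM ( Fact[FactorB] ) ),
    USERELATIONSHIP ( 'Date'[Date], Fact[ShipDate] )     // anchor Calc1 on ship date
)

Calc2 :=
CALCULATE (
    DIVIDE ( SUM ( Fact[FactorX] ), SUM ( Fact[FactorY] ) ),
    USERELATIONSHIP ( 'Date'[Date], Fact[OrderDate] )    // anchor Calc2 on order date
)

Filtering 'Date'[Date] to 1/1/2019 once then slices both measures over their respective anchor dates, and nothing ties every measure to a single numerator/denominator shape. For n dates that should act as independent slicers, separate date tables (one per role) are the usual way out.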

Measure Value rounds off in SSAS cube browser

Some of my measure values are being rounded off in the SSAS cube browser to the nearest whole number, e.g. 5.11 to 5. The values show fine in the data source view, up to two decimal places. I tried setting the format string to '$#,##0.00;-$#,##0.00', but to no avail. Strangely, all other measures show decimal places in the browser. Please help.

Cube Partitioning taking more time than usual for SSAS cube process


Hi,

Previously, processing the SSAS database using an SSIS package (Analysis Services Processing Task) took 6 hours to complete, including synchronization.

I partitioned all cubes in that DB into 3 partitions each: 1 year, 2 year, and the rest. I also created SSIS jobs as below and distributed the partitions across 8 parallel SSIS tasks, so partitions can be processed in parallel:

1st: it picks the 1-year partitions, for the latest 1 year of data.

2nd: it picks the 1-year and 2-year partitions, for the latest 2 years of data.

3rd: it picks the 1-year, 2-year, and remaining partitions, for all of the data.

I executed the 3rd SSIS job, which first processes all dimensions, then all partitions in 8 parallel tasks, and then synchronizes the DB to the query server. But it is taking more than 12 hours.

In the previous setup the SSIS task selected only the SSAS DB name; now I select every SSAS object within the task to make it run in parallel.

Since the 3rd job processes all SSAS DB objects, just as the whole SSAS DB was processed before, I was expecting it to complete in the same time (6 hours) at worst, but it took double.

I am not sure what went wrong. Can anyone please assist? Any info or suggestion would be helpful.

Regards,

Column hiding (object-level security) in SSAS giving an error in Power BI


Hello,

I have an Analysis Services Tabular model (1400 compatibility) in which I am applying object-level (column) security to hide one of the columns from a user who is a member of a role that also has row-level security (RLS) applied to filter a specific set of records.

However, after deploying this to Azure Analysis Services and connecting through a Power BI live connection, I get the error below in Power BI:

Query (13, 15) Column 'emp_name' in table 'emp' cannot be found or may not be used in this expression. Technical Details: RootActivityId: 001388e7-3fb9-479d-981f-3a0f2cc7861a Date (UTC): 3/18/2019 1:49:49 PM
Please try again later or contact support. If you contact support, please provide these details.

We need to hide a sensitive column from one user (e.g. Role1), but the same column should remain visible to another user (e.g. Role2). We do not want to use perspectives in SSAS, though.

I also checked the model.bim JSON output below, and I can see that metadataPermission is set to none for the specific user/Role1:

    "roles": [
      {
        "name": "Role1",
        "modelPermission": "read",
        "members": [
          {
            "memberName": "xyz@abc.com",
            "memberId": "xyz@abc.com",
            "identityProvider": "AzureAD"
          }
        ],
        "tablePermissions": [
          {
            "name": "dept",
            "filterExpression": "dept[cust_id]=\"1001\""
          },
          {
            "name": "emp",
            "filterExpression": "emp[cust_id]=\"1001\"",
            "columnPermissions": [
              {
                "name": "emp_name",
                "metadataPermission": "none"
              }
            ]

After doing some online research (see links below), it seems that there used to be a similar bug (though with calculated measures in SQL Server 2017 & SSAS):

https://prologika.com/demystifying-tabular-object-level-security-ols/

https://support.microsoft.com/en-us/help/4098732/calculation-error-occurs-when-secured-measure-is-queried-in-ssas-2017?ranMID=43674&ranEAID=je6NUbpObpQ&ranSiteID=je6NUbpObpQ-0wdVXgb9Jk0QnYWd9QTM8A&epi=je6NUbpObpQ-0wdVXgb9Jk0QnYWd9QTM8A&irgwc=1&OCID=AID681541_aff_7795_1243925&tduid=(ir__nw9npvxkp9kfri2x0h20wk909m2xm1t0o66auubx00)(7795)(1243925)(je6NUbpObpQ-0wdVXgb9Jk0QnYWd9QTM8A)()&irclickid=_nw9npvxkp9kfri2x0h20wk909m2xm1t0o66auubx00

I am using the Microsoft Data Tools Analysis Services extension in Visual Studio 2017 Enterprise to build the SSAS Tabular model. The SKU/pricing tier is S0 (Standard), deployed to an Azure Analysis Services database. The source is Azure SQL DW.

Please suggest.

Thanks !

Joining two SCD Type 2 tables with a 1:M relationship and a complex join & filter condition in an SSAS Tabular model


Assume I have the two tables below, Department and Employee, where I store data for different customers (tenants) and there is a one-to-many (1:M) relationship between them (i.e. one department can have one or more employees).

Both tables are SCD Type 2, i.e. they store history with effective and termination dates. There are no constraints, indexes, etc. created on these tables at the database level.

Department table:
cust_id dept_id dept_name   efctv_dt    trmntn_dt   Dept_key
1001    D1      IT        12-01-2018    12-31-9999  1001D1
1001    D2      HR        01-01-2019    12-31-9999  1001D2
1002    D3      Admin     02-01-2019    02-28-2019  1002D3
1002    D3      HR+Admin  03-01-2019    12-31-9999  1002D3
1002    D4      Finance   02-01-2019    12-31-9999  1002D4

Employee table:
cust_id emp_id  emp_name    dept_id efctv_dt    trmntn_dt   Emp_key
1001    E1      XYZ          D1    01-01-2019   01-31-2019  1001D1
1001    E1      XYZ-A        D1    02-01-2019   12-31-9999  1001D1
1001    E2      ABC          D2    02-01-2019   12-31-9999  1001D2
1002    E3      AXBYCZ       D3    03-01-2019   03-31-2019  1002D3
1002    E3      AXBYCZ       D4    04-01-2019   12-31-9999  1002D4
1002    E4      DEFG         D4    04-01-2019   12-31-9999  1002D4

The columns cust_id and dept_id can be concatenated into a separate key column in both tables and used to join the two tables.

Department Key=Concatenate(department[cust_id], department[dept_id] )

Employee Key=Concatenate(employee[cust_id], employee[dept_id] )

Example key output values= 1001D1, 1001D2, 1002D3, 1002D4

Now let’s say we have following reporting requirements, i.e.

To filter on Date Ranges (in visualization) - assuming there's another date dimension table with all dates & hierarchy

1) When no specific date range or filter is selected - show all currently active employee and department names (where trmntn_dt = 12-31-9999). So the expected output is:

Emp Name Dept Name
XYZ-A IT
ABC HR
AXBYCZ Finance

2) When reporting for a specific month, e.g. Jan 2019 - show all employee and department names active as of that month. So the expected output is:

Emp Name Dept Name
XYZ IT

3) When reporting for a specific quarter, e.g. Q1 2019 - show all employee and department names active as of that quarter. So the expected output is:

Emp Name Dept Name
XYZ-A IT
ABC HR
AXBYCZ HR+Admin

However, a 1:M relationship between these two tables in the AS Tabular model would fail, because the rows are not unique in the Department table (the rows for D3), which is on the one side of the relationship.

You could also include efctv_dt or trmntn_dt in the concatenated key used for joining the two tables, i.e.:

Department Key=Concatenate(department[cust_id], department[dept_id] ) & Concatenate(department[efctv_dt],””))

Employee Key=Concatenate(employee[cust_id], employee[dept_id] ) & Concatenate(employee[efctv_dt],””))

Example key output values= 1001D112-01-2018, 1001D201-01-2019…

Now the rows would be unique, since we don't expect the same row twice on the same day (barring ETL issues, such as the process running twice on the same day).

But the AS Tabular model doesn't allow complex join conditions (like in a SAP BO universe), so we cannot add a condition like the one below when joining these two SCD Type 2 tables, which might help solve some of the requirements - something like this in the WHERE clause of the SQL:

    dept.cust_id = emp.cust_id
And dept.dept_id = emp.dept_id
And ( calendar_date is between efctv_dt and trmntn_dt 
        Or 
       trmntn_dt = ’12-31-9999’
    )

I think creating/calculating any measure value is still doable - there are lots of DAX examples available online for filtering on the dates - but what about just the dimensional attributes?
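One way to cover the attribute-only case is to keep Emp Name and Dept Name as plain attributes and push the validity-date logic into a measure used as a visual/pivot filter. A minimal sketch, assuming the Employee table above and a period selection coming from the date dimension table (the measure name is a placeholder):

Active Employee Rows :=
VAR PeriodEnd = MAX ( 'Date'[Date] )   // end of the selected month/quarter, or the last date when nothing is selected
RETURN
    CALCULATE (
        DISTINCTCOUNT ( Employee[emp_id] ),
        FILTER (
            Employee,
            Employee[efctv_dt] <= PeriodEnd
                && Employee[trmntn_dt] >= PeriodEnd   // open-ended 12-31-9999 rows always qualify
        )
    )

Used as a visual-level filter (Active Employee Rows > 0) with Emp Name on the rows, this hides employee versions that were not active as of the selected period; an equivalent measure over Department can do the same for Dept Name. It does not remove the need for a workable relationship between the two tables, but it keeps the validity dates out of the join key.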

Is this the right approach? How should these requirements be handled - and, ideally, without using surrogate keys to generate unique values in each SCD Type 2 table and referencing them as FK/reference keys from parent to child (1:M), i.e. Dept -> Emp?

Please suggest.

Thanks!


How do we get a list of aggregations which are unused on the cube?


SQL Server version: 2016

Is there a way to know whether an aggregation on the cube is unused? I have tried to get the information from the DMVs $system.discover_object_activity (which gives me the count of aggregation hits and misses per partition/measure group) and $system.discover_partition_stat (aggregation name and its processed state). But I am not able to determine whether an aggregation is left unused, i.e. to get a list of aggregations which have been used/unused.

The cube being investigated has over 200 aggregations. It is cumbersome to go through each one of them manually and check the combinations. Any help is appreciated.

DAX to show data only until current month for tickets open at the end of each month


Hi,

I have the following three measures to arrive at EoMOpen:

OpenTkt :=
CALCULATE (
    DISTINCTCOUNT ( TicketIncoming[ID] ),
    FILTER ( ALL ( 'Date'[Date] ), 'Date'[Date] <= MAX ( 'Date'[Date] ) ),
    USERELATIONSHIP ( 'Date'[Date], TicketIncoming[OpenedDate] )
)

CloseTkt:=

VAR maxDate =
    MAX ( 'Date'[Date] ) + 1
RETURN
    CALCULATE (
        DISTINCTCOUNT ( TicketIncoming[ID] ),
        FILTER (
            ALL ( 'Date'[Date] ),
            'Date'[Date] <= maxDate
                && 'Date'[Date] <> BLANK () 
        ) )

EoMOpen:=OpenTkt - CloseTkt

This gives me correct numbers; the only issue is that it shows data for all months after the current month. We don't want to see data for future months. For example, we should see data for all months until March 2019, but not for April onwards.

How can I fix this? Below is what I see; I don't want to see data for months 4-12 currently.
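One common fix is to blank the measure for any period that starts after today, so future months drop out of the pivot. A minimal sketch reusing the measures above (and the same 'Date' table):

EoMOpen :=
IF (
    MIN ( 'Date'[Date] ) <= TODAY (),   // only periods that have already started
    [OpenTkt] - [CloseTkt]              // otherwise BLANK, so the month disappears from the report
)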



Specifying thousand separator when using SSAS


My question might be closer to Power BI, but maybe some of you can answer it anyway.

My problem is that I don't know how to specify which character should be used as the thousand separator --> https://i.imgur.com/19nzW8p.png. As you can see, VS and Excel use the "correct" one, dots (well, in the end I will use a space, but for demo purposes, while trying to figure this out, I decided to change it and see where I can change it), but Power BI still shows spaces and not the dots I specified on my PC --> https://i.imgur.com/KNGnFkS.png. SSAS is installed on my PC, so I'm connecting to localhost.

Since I'm connecting to an SSAS model with Live Connection, most of the changes are not available, e.g. "Regional Settings" --> https://i.imgur.com/LNCsWXs.png, so that's a dead end.

So...where and how should I change the thousand separator character?

Thanks in advance.

Multiple messages with "The 'XXX' dimension was not generated because it is bound to a user table"


Hello, I have a requirement to implement two different date schemas in our SSAS cubes: 1) a fiscal year, Oct to Sep, and 2) a calendar year, Jan to Dec. I already have the fiscal year and am now trying to implement the calendar year.

I am trying to create a new date dimension for the calendar year, but each time I try I get a bunch of errors in Visual Studio. I have attached the image for your reference. Can someone please tell me why I cannot do this?

Any suggestion is greatly appreciated.

What if our analysts want to mix SSAS Tabular with engine tables in the same query?


Hi. We run SSAS 2016.

I like the idea of SSAS Tabular being the only place our calculated measures are defined.

But I am concerned that some analysts' hands will be tied if the things they want to pivot on, in conjunction with what's available in the Tabular cube, need to be partially sourced from other data sources, most likely SQL Server, sometimes DB2.

What tool(s) (if any) can be used to support such cross platform joins?
