Channel: SQL Server Analysis Services forum

Tabular speed using Excel


SQL 2014

We have a 5 MB tabular database with 10 partitions and about 25,000 records per partition.  We're accessing this database from Excel and using slicers to view/filter the data.  We're seeing what we think is slow performance in Excel, and I want to reach out to the experts to see whether it is indeed slow, whether there is something we can do about it, or whether this performance is what we should expect.  When we click on a filter, Payroll Date, it takes about 2 seconds for Excel to run the OLAP query and update the data.  I've profiled the SQL side and know it only takes about 250 ms for SQL to do its part, so my question is about the other 1.75 seconds.  Is this as good as it gets, or are there other things we can do to improve the performance in Excel?  Performance is essentially the same whether we're inside our network or outside, VPN'd in.  This is our first experience with Tabular and Excel, so any tips are appreciated.

Thanks in advance.


André


Best way to add additional parent child attribute values.


I have a parent-child attribute in my dimension.  I am currently displaying the correct ID value, as the business wants, so now they can see the rollup of the ID (intOrgNodeID) values.  They would also like to see the same rollup of the name (vcharOrgNodeName) for this ID; however, they do not want it concatenated.  They want to be able to see them separately.

You cannot create two parent-child attributes in one dimension, so I'm not sure if there is some simple trick to make this work.  It seems like there should be one.

My dimension table looks something like this

intdimOrgNodeID int Key (surrogate key)

intOrgNodeID int (Actual ID)

intDimParentOrgNodeID

vcharOrgNodeName

In the Properties I have set the following:

KeyColumns  = tbldimOrgNode.intDimParentOrgNodeID

NameColumn = tbldimOrgNode.intOrgNodeID


Ken Craig

How to combine rows onto the same line using SQL in SQL Server?

<pre>NIK     IN/OUT             DATE
10026   1        2015-07-07 14:15:09.000
10026   0        2015-07-06 14:16:28.000
10026   1        2015-07-06 14:16:37.000
10026   0        2015-07-08 05:26:17.000</pre>

I want the result like below:

<pre>
NIK     DATE IN                     DATE OUT
10026   2015-07-07 14:15:09.000       null
10026   2015-07-06 14:16:28.000   2015-07-06 14:16:37.000
10026   null                      2015-07-08 05:26:17.000
</pre>

How can I combine the IN and OUT rows onto the same line, based on the IN/OUT field, using SQL in SQL Server?
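The reshaping being asked for - collapse each IN event and its matching OUT event onto one row per NIK - can be sketched outside SQL first. Here is a minimal Python simulation, assuming flag 1 means IN and 0 means OUT (the sample output above is ambiguous on this point):

```python
def pair_in_out(rows):
    """Collapse IN/OUT events onto one row per visit.

    rows: iterable of (nik, flag, timestamp) tuples, where flag 1 is
    assumed to mean IN and 0 to mean OUT. Returns (nik, date_in, date_out)
    tuples; an unmatched event leaves the other side as None.
    """
    result = []
    per_nik = {}
    for nik, flag, ts in sorted(rows, key=lambda r: (r[0], r[1] is None, r[2])):
        per_nik.setdefault(nik, []).append((flag, ts))
    for nik, events in per_nik.items():
        pending_in = None
        for flag, ts in events:
            if flag == 1:                   # IN: hold until an OUT arrives
                if pending_in is not None:  # two INs in a row
                    result.append((nik, pending_in, None))
                pending_in = ts
            else:                           # OUT: close the open IN, if any
                result.append((nik, pending_in, ts))
                pending_in = None
        if pending_in is not None:          # trailing IN with no OUT
            result.append((nik, pending_in, None))
    return result
```

A set-based T-SQL version would typically number the events per NIK ordered by DATE (or use LEAD()) and join each IN row to the next OUT row; the exact query depends on how 1/0 actually map to IN and OUT.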

Dynamic Security in a denormalized Parent-Child dimension Table


Hi guys, I need your priceless help again:

I have a parent-child relationship in a table with a fixed depth, let's say Region --> Area --> Country.
I denormalized the table to have something like this:

[flattened hierarchy table]

Then, to implement dynamic security, I'm thinking of a bridge table with the UserId and the CountryId; with a measure group and a measure that counts the user/country combinations, I can enforce the security using the NonEmpty function.

My question is how I can also set security for the levels above the leaf members; let's say I want to assign a user to the Area level or the Region level. I don't know exactly which key I could include in the bridge table.

I may want to keep the IDs of the original table at the different levels.
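One way to sketch that idea: keep the original level IDs in the bridge table together with a level name, and resolve grants above the leaf down to countries through the flattened table. A hypothetical Python sketch (the column layout and the Region --> Area --> Country levels are assumptions taken from the example above):

```python
# Flattened hierarchy rows: (region_id, area_id, country_id).
HIERARCHY = [
    (1, 10, 100),
    (1, 10, 101),
    (1, 11, 102),
    (2, 12, 103),
]

# Which column of the flattened row each level's ID lives in.
LEVEL_COLUMN = {"Region": 0, "Area": 1, "Country": 2}

def allowed_countries(bridge_rows):
    """bridge_rows: list of (user, level_name, member_id) grants.

    Expands each grant down to the leaf (country) level, returning
    {user: set of country_ids the user may see}.
    """
    allowed = {}
    for user, level, member_id in bridge_rows:
        col = LEVEL_COLUMN[level]
        countries = {row[2] for row in HIERARCHY if row[col] == member_id}
        allowed.setdefault(user, set()).update(countries)
    return allowed
```

In the cube, the equivalent would be a bridge fact keyed on the leaf (CountryId) only, with grants at higher levels expanded into leaf rows during ETL, so the existing NonEmpty-based security measure keeps working unchanged.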

Any comment will be appreciated.
Kind Regards,

Session stats and other


Hi to all,

I'm new to SQL Server and I have two questions:

  1. Is it possible to get any statistics about user login session duration?
  2. Is it possible to get time statistics for SQL queries?

Thank You in advance.

How do I insert new members in bulk?


Enterprise 2014 SQL Server - SSAS

I need to insert 50-100 new members each week.

Is there an SSIS procedure that will do this?


ITProTek


Performance issue in browsing SSAS cube using Excel for first time after cube refresh


Hello Group Members,

This is a continuation of my earlier question - https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices

As that thread is marked as answered but my issue is not resolved, I am creating a new thread.

I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but around 4 GB available), the first open takes 10 minutes. From the next run onwards, it opens quickly, within 10 seconds.

We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube databases: 3 get a full cube refresh and 1 an incremental refresh. After the daily cube refresh, it takes 10-odd minutes to open the cube on an end user's system; from then on it opens really fast, within 10 seconds. After a cube refresh, on server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.

Is there any way we could reduce the time taken for the first attempt?

As mentioned in my previous thread, we have already implemented cache warming for the cube, but there is no improvement.

Currently, the cumulative size of all 4 cube DBs is more than 9 GB in production, with each cube DB containing 4 individual cubes on average and the largest cube DB being 3.5 GB. So the question is: how does Excel work with an SSAS cube after the daily cube refresh?

Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client would take significant time, depending on the bandwidth of the network connection.

Does it depend on the client system's RAM in any way? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system has 8 GB of RAM, the available (free) RAM is around 4 GB. What will happen then?


Best Regards, Arka Mitra.

How to open/create cubes

I installed SQL Server 2014 Developer edition. What version of BIDS do I need to install to create SSAS cubes? I installed Visual Studio 2013, but I don't see any option to open an SSAS database or create an SSAS project.

While installing SQL 2014, I selected all features.

thanks
V

TopCount with custom date range


I am using a client tool that allows me to use MDX to specify the axis member of a chart. I need to show on the y axis the top 10 customers in sales order count over the last 12 full months. Because of the way the tool works, all of my logic must be in the axis expression. I cannot use a full MDX query (e.g. with a Where statement) to express this.

TopCount works perfectly if I limit the time period to a specific member - for example, limited to orders for June of 2014:

TOPCOUNT([Customer].[Customer].children,10,([Measures].[order Count],[Date].[Month ID].[201406]))

A test query would look like this:

Select [Measures].[Order Count] on 0,
TOPCOUNT([Customer].[Customer].children,10,([Measures].[order Count],[Date].[Month ID].[201406])) on 1
from MyCube

But I need the date range considered in the count to be a specific range, like over the past 12 months, so I was hoping something like this would work:

TOPCOUNT([Customer].[Customer].children,10,([Measures].[order Count],{[Date].[Month ID].[201406]:[Date].[Month ID].[201506]}))

But when I do this I get the message: 

The TOPCOUNT function expects a string or numeric expression for the 3 argument. A tuple set expression was used.

How can I do this?  Actually, the date range I want to use is already defined in a named set.  Ideally, the orders considered for the top 10 would be only those in the named set, like:

TOPCOUNT([Customer].[Customer].children,10,([Measures].[order Count],[Specific Month Set]))

But when I try using only the named set I get:

The dimension '[Specific Month Set]' was not found in the cube when the string, [Specific Month Set], was parsed.
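For what it's worth, TopCount's third argument has to be numeric, so the usual pattern is to aggregate the set there - something like Sum({[Date].[Month ID].[201406]:[Date].[Month ID].[201506]}, [Measures].[Order Count]), or Sum([Specific Month Set], [Measures].[Order Count]) - rather than a tuple containing a set. The intended computation can be checked with a small Python sketch over made-up data:

```python
def top_customers(records, n, month_lo, month_hi):
    """records: (customer, month_id, order_count) rows.

    Sum the order count per customer over [month_lo, month_hi] inclusive,
    then return the top n customers by that sum - the same ranking that
    TopCount with an aggregated (Sum over the month set) third argument
    would produce.
    """
    totals = {}
    for customer, month, count in records:
        if month_lo <= month <= month_hi:
            totals[customer] = totals.get(customer, 0) + count
    return sorted(totals, key=totals.get, reverse=True)[:n]
```

The point of the sketch is that the third argument is a single number per customer (the sum over the range), not a set, which is why the tuple-with-set form is rejected by the parser.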




SSAS - role playing dimension - limiting the irrelevant

Hi,

I've been using several role-playing dimensions in my cube: one for "Sales Turnover", one for "Sales Date", one for "Inventory Turnover", etc. There is a fact table that has Valid From and Valid To information, and I am trying to add one other dimension, "Time Period", to let users play around with one dimension rather than two. I found a solution at http://www.purplefrogsystems.com/blog/2013/04/mdx-between-start-date-and-end-date/ and implemented it.

I am having a difficulty here: the SQL Profiler queries show me that the base query uses the "Time Period" dimension, but the overlying query uses the "Inventory Turnover" dimension and eventually results in NULL.

I realized that it's the other role-playing dimensions that are causing issues, so:
1. I wrote a scope to make the calculated measure null by default.
2. Then a scope for the "Time Period" dimension to calculate based on the Valid From and Valid To ranges, as described on the website above.

This doesn't help either. I also played around with the "IgnoreUnrelatedDimensions" property, but this doesn't change the query.
To debug a little further, I removed the other role-playing dimensions one by one, and the MDX changed to use the one that's left. So I got rid of all role-playing dimensions except "Time Period", "Valid From", and "Valid To" (which are all used in the calculation). Then the query was fine.

I would not be able to drop the other dimensions for good (I'd have to bring them back). Could anyone please suggest a solution to limit the calculated measure to the role-playing dimension that's used in it (or only the ones used in dimension usage)?

Cube Processing


Dears

I have a financial cube, developed using SQL Server 2012, with weekly partitions. I noticed an abnormal increase in revenues on one day within a week. I went back to the original table and ran a query to calculate that day's revenues; the result was normal and different from the cube's result. I did the same with the view the measure group is based on, and that result was also normal.

I reprocessed that partition (full process), then deleted it, created it again, and processed it; nothing changed.

Why is the cube result different from the table or view result? What could be the reasons behind that, and what are possible solutions?

I appreciate your assistance.

Regards

SSAS Question about "grant write access to cube"

SSAS Tabular - Share dimensions between projects?


Imagine that you have two SSAS Tabular projects, for example Sales and Stocks.

Is there any way to share dimensions between them, to avoid duplicating work such as, in a Dim Customer, having to hide columns, unhide and rename specific columns, etc.?

Regards



Urgent! Issue with primary key duplicate value while not existing


Hi all,

I have an error in production saying "Cannot insert duplicate key in object xxx". The duplicate value is 287490.

This is exactly the last ID, but the data behind it is not a duplicate; I verified. Every so many months we have this issue, where the auto-increment gets stuck and needs to be adjusted in SSMS.

With a sequence I managed to do this, but how do I do it with a primary key?

Thanks in advance!

Statistical function results in calculated members


Hi Again:

I'm working with the statistical functions Stdev and Median in calculated members.  The only way I can get the "correct" answer is if I have a dimension at the same granularity as the fact table (actually a degenerate dimension of the fact table itself).  Otherwise, the measure I'm using with Stdev returns results that are so wildly high that I think it must be acting on the SUM of the measure, because the measure itself is a summed one.  Does that seem right?

When I try to use the coordinates in the Stdev function, it seems to be using the wrong set of data points:

stdev( ( [Date].[Date].[Date].members, [Parameter].[Parameter].[Parameter].members ), [Measures].[Value])  returns answers in the thousands when it should be more like 2.5

When used in a query, there would only be a single date member and a specific parameter member.  The total number of fact records is between 200 and 500, with values that range between 0 and 150.  This is the version that gives me answers resembling the total sum of [Measures].[Value].

If I add the dimension that is essentially a row number from the fact table, it gives the right answer (slowly, but that will be a different post):

stdev( ( [Date].[Date].[Date].members, [Parameter].[Parameter].[Parameter].members, [FACTTable].[FACTTable].[KeyField].members ), [Measures].[Value])

I get the feeling I'm missing the point...  Any guidance greatly appreciated!
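That suspicion - Stdev acting on sums rather than on the detail rows - can be checked numerically. A small Python illustration with made-up values: the spread of the detail rows is small, but once the rows are summed into a couple of coordinate cells, the standard deviation is on the order of the sums themselves:

```python
from statistics import stdev  # sample standard deviation, like MDX Stdev

# Made-up detail rows for two coordinate cells (e.g. two parameter members).
cell_a = [2.0] * 100
cell_b = [3.0] * 100

# Stdev over every detail row - what adding the fact-row-number
# dimension to the set achieves:
detail_sd = stdev(cell_a + cell_b)   # about 0.5

# Stdev over the summed cells - what happens when the set only
# distinguishes coarse coordinates, so the measure aggregates first:
bucket_sd = stdev([sum(cell_a), sum(cell_b)])   # about 70
```

The second figure tracks the magnitude of the sums, which matches the "answers in the thousands" symptom described above.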


Cheers and thanks, Simon


Multiple dimension values in column labels; calculated measure requires the total of one dimension value


Hi

I have a pivot table created from an SSAS cube, as shown below, with one measure and two dimensions in the column labels. I want to add a calculated column = Dimension3-Val1 Total / Dimension2-Val2.

Please advise on how to achieve this.

<pre>
Measure 1     Column Labels
              Dimension3-Val1                      Dimension3-Val1 Total
              Dimension2-Val1    Dimension2-Val2
Row Labels
Dimension1
</pre>

Thanks

Auditing Analysis services roles


Hi,

We would like to be notified whenever the Analysis Services Administrator role or any other database roles are changed (a role created/removed or users added/removed). What would be the easiest way to accomplish this?

Thanks

Data not showing correctly after modifications to the underlying data in the data warehouse


Hi ,

I am having a situation where the data was refreshed, some dimension tables were updated in the data warehouse, and the cube was fully processed.

Now if I query the data warehouse I can see the data falling under the correct dimension categories according to the new modifications, but if I browse the data in the cube it is not displayed correctly: the data is not falling under the correct dimension category; it is still falling under the old dimension category it had before the data was refreshed.

Any idea what's causing this, and is there anything I have to do in the cube design to handle this, other than processing the cube fully?

thanks


adhikari707

Invalid at TOP Level of Document


Hi ,

I am getting the following error while creating a partition through DTS:

Invalid at TOP Level of Document

Please suggest what's going wrong.

Regards,

Sanjeevan




Hope this will help you !!!
Sanjeewan

Query Performance on Case statement


Hello,

Can anyone kindly suggest how I can improve the query below?

WITH
SET [AS] AS
  TOPCOUNT(
    CASE 'MF'
      WHEN "MF" THEN [MANUFACTURER].[Manufacturer].ALLMEMBERS
      WHEN "R"  THEN [Geography].[State].ALLMEMBERS
      WHEN "C"  THEN [Product].[OTC1].ALLMEMBERS
    END,
    20,
    (CASE 'MT'
       WHEN "MT" THEN [Measures].[Indirect Universe Retail Value]
       WHEN "WT" THEN [Measures].[Indirect Retail Value]
     END)
  )
SELECT [AS] ON 0 FROM Cube

Presently it takes around 25 seconds to run; I would like to get it down by 10 seconds.
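Since both CASE expressions test string literals ('MF' and 'MT'), each branch is already decided before the query ever runs, so one option is to resolve the choice client-side and send a TopCount with the set and measure literals already substituted in. A hedged sketch of building the statement that way (a hypothetical helper; the member names are taken from the query above):

```python
SETS = {
    "MF": "[MANUFACTURER].[Manufacturer].ALLMEMBERS",
    "R":  "[Geography].[State].ALLMEMBERS",
    "C":  "[Product].[OTC1].ALLMEMBERS",
}
MEASURES = {
    "MT": "[Measures].[Indirect Universe Retail Value]",
    "WT": "[Measures].[Indirect Retail Value]",
}

def build_topcount_mdx(set_key, measure_key, n=20, cube="Cube"):
    """Resolve the CASE branches before sending the query, so the engine
    evaluates a plain TopCount instead of CASE expressions."""
    return (
        "WITH SET [AS] AS "
        f"TOPCOUNT({SETS[set_key]}, {n}, {MEASURES[measure_key]}) "
        f"SELECT [AS] ON 0 FROM {cube}"
    )
```

Whether this alone closes the gap depends on where the 25 seconds actually go (a Profiler trace would show it), but it removes per-evaluation CASE work from the engine.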

Regards,

Bharath


