October 14, 2014

SSRS report header and footer get cut off when exported to Word format

Problem: The header and footer were cut off when printing an SSRS report that had been exported to Word format.

According to Riccardo Muti from Microsoft: "The margins in the word document do match those in the report at 2 cm or 0.79 in. However, Word and SSRS treat page headers and footers differently. Word actually positions them inside the page margins, whereas SSRS positions them inside the area that the margins surround. As a result, in Word, the page margins do not control the distance between the top edge of the page and that of the page header (or similarly for the page footer). Instead, Word has separate "Header from Top" and "Footer from Bottom" properties to control those distances. Since RDL does not have equivalent properties, the Word renderer sets these properties to zero."

Reference:
http://connect.microsoft.com/SQLServer/feedbackdetail/view/614558/word-export-sets-margin-top-margin-bottom-to-0mm

The workaround I have so far: before printing the report from Word, increase the size of the header and footer in Word. This lets the report print from Word without the header and footer being cut off.

September 4, 2014

SSRS Alternate Row Color

This post shows how to set an alternate row background color in an SSRS tablix report using an expression.

1) Select the row in the tablix, go to Properties (press F4), and open the expression for the BackgroundColor property.


2) Use this expression:
                                    = IIf(RowNumber(Nothing) Mod 2 = 0, "Silver", "Transparent")

Then, click OK.


Here, RowNumber(Nothing) returns a sequential row number for each row.

RowNumber(Nothing) Mod 2 divides the row number by 2, giving a remainder of 0 or 1. The IIf condition then applies the color for the corresponding result.

3) Then, run the report to see the output. You should get something like this.

Hope, this will help you.

August 25, 2014

How to import data into SQL Server using TSQL

There are features such as the Import/Export wizard and SQL Server Integration Services to get data into SQL Server. We can also load data directly into SQL Server using T-SQL commands and features such as
BCP, BULK INSERT, OPENROWSET, OPENDATASOURCE, OPENQUERY and linked servers.
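As a minimal sketch of the BULK INSERT option — note that the file path, table name and column layout below are my own hypothetical assumptions, so adjust them to your environment:

```sql
-- Hypothetical target table; adjust the columns to match your file
CREATE TABLE dbo.CustomerImport
(
    CustomerId INT,
    CustomerName VARCHAR(50)
)

-- Load a comma-delimited file (the path and format options are assumptions)
BULK INSERT dbo.CustomerImport
FROM 'C:\Import\Customers.csv'
WITH
(
    FIELDTERMINATOR = ',',  -- column delimiter in the file
    ROWTERMINATOR = '\n',   -- row delimiter
    FIRSTROW = 2            -- skip the header row
)

SELECT * FROM dbo.CustomerImport
```

OPENROWSET and OPENQUERY follow the same idea but read through an OLE DB provider or a linked server instead of a flat file.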

August 20, 2014

SQL - Case Statement

The CASE statement is a conditional operator in Structured Query Language that compares expressions and returns one of several possible results. It can also be used to evaluate a set of Boolean expressions. CASE is most often useful in stored procedures, views, and in the formula for a computed column when designing a table.

Syntax:

CASE input_expression
         WHEN when_expression1 THEN result_expression1
         WHEN when_expression2 THEN result_expression2
          .....
ELSE else_result_expression
END

--------------------------------------------------------------------------------------------------------------------------

For example:

DECLARE @Val INT = 1

SELECT
CASE @Val
WHEN 1 THEN 'First'
WHEN 2 THEN 'Second'
WHEN 3 THEN 'Third'
ELSE 'Not Known'
END

Result: First
--------------------------------------------------------------------------------------------------------------------------

DECLARE @Val INT  = 5

SELECT
CASE
WHEN @Val < 0 THEN 'Negative'
WHEN @Val < 3 THEN 'Less than 3'
WHEN @Val > 3 THEN 'Greater than 3'
ELSE 'Value not assigned'
END

Result: Greater than 3

--------------------------------------------------------------------------------------------------------------------------

Formula for a particular column while designing the table.

Below, you can see on the formula
(case [RoleID] when (1) then '100' when (2) then '75' when (3) then '50' else '0' end)


Result:



When a role is assigned to the salesman, the SalesTarget column value is set automatically based on the assigned RoleID.

August 8, 2014

SQL Server - Split the string and insert into table

The script below uses a common table expression to split the given string and insert the parts into a table variable.

DECLARE @TempTable TABLE(Name varchar(50))

DECLARE @String VARCHAR(255) = 'Kale,Gore,Nepte,Chepte'

;WITH CTE AS
(
    SELECT LEFT(@String, CHARINDEX(',', @String) -1) AS Name, RIGHT(@String, LEN(@String) - CHARINDEX(',', @String)) AS RemainingString
    UNION ALL
    SELECT LEFT(RemainingString, CHARINDEX(',', RemainingString) -1) AS Name, RIGHT(RemainingString, LEN(RemainingString) - CHARINDEX(',', RemainingString)) AS RemainingString
    FROM CTE
    WHERE CHARINDEX(',', RemainingString)>0
    UNION ALL
    SELECT RemainingString AS Name, NULL AS Remainder
    FROM CTE
    WHERE CHARINDEX(',', RemainingString)=0
)

Insert into @TempTable (Name) SELECT Name FROM CTE

SELECT * FROM @TempTable

Output: Kale, Gore, Nepte, Chepte (one name per row)


July 18, 2014

SQL Server - PATINDEX() function

The PATINDEX() function returns the starting position of a pattern in a string. It takes two arguments, the pattern to search for and the expression (string) to search, and is not case sensitive. It returns zero if it doesn't find the pattern.

Syntax:

PATINDEX('%pattern%', expression)

This function supports the same wildcards as the LIKE operator: the % wildcard for zero or more characters and the _ wildcard for any single character.

The pattern is usually enclosed in % characters. Patterns can also use the characters "_", "[]" and "[^]".

% matches zero or more characters.
_  matches any single character.
[] matches any single character within the brackets.
[^] matches any single character that is not in the brackets.

It will be clearer how PATINDEX works in SQL Server by looking at the example below:

DECLARE @var VARCHAR(255)= 'PATINDEX() will return zero if it does not find the pattern.'
SELECT
            PATINDEX('%n_t%', @var) AS 'not',          -- Returns: 40 ("not" in "does not")
            PATINDEX('%pat%', @var) AS 'PATINDEX',     -- Returns: 1 ("PAT" in "PATINDEX", case insensitive)
            PATINDEX('%pat[^i]%', @var) AS 'pattern',  -- Returns: 53 ("patt" in "pattern")
            PATINDEX('%w[a-i]%', @var) AS 'will'       -- Returns: 12 ("wi" in "will")


July 16, 2014

Current Time - SQL Server

You can get the current datetime from SQL Server in three different ways:

1) SELECT CURRENT_TIMESTAMP

2) SELECT {FN NOW()}

3) SELECT GETDATE()

July 11, 2014

Difference between Temp Table, Table Variable and Common Table Expression (CTE)

Temp tables in SQL Server are temporary tables created in the TempDB database. TempDB can be found under System Databases. A temp table is stored as an object and is created with a single pound sign (#) prefix.


A temp table acts just like a normal table: it can have primary keys, constraints, indexes and statistics. It can be referenced in different places within procedures but is limited to the SQL Server session or connection. It's better to use a temp table when there is a lot of data to be handled with the help of indexes and statistics.

Example:

CREATE TABLE #TEMP
(
        ID INT,
        FIRSTNAME VARCHAR(50),
        LASTNAME VARCHAR(50)
)

GO

INSERT INTO #TEMP VALUES (1, 'Avesh', 'Dhakal')

GO
SELECT * FROM #TEMP

A table variable is like a temp table and is also created in the TempDB database. The main difference is that while a primary key or unique constraint declared inline will still create an index, you cannot add other indexes afterwards, and SQL Server does not maintain statistics on table variables. Table variables follow variable scoping rules: they exist from the point they are declared and are dropped once the batch ends. If there isn't much data to be handled and you don't need indexes and statistics, a table variable can be used.

DECLARE @TABLE TABLE
(
        ID INT,
        FIRSTNAME VARCHAR(50),
        LASTNAME VARCHAR(50)
)

INSERT INTO @TABLE VALUES (1, 'Avesh', 'Dhakal')

SELECT * FROM @TABLE

GO  -- GO ends the batch, so the table variable goes out of scope here
SELECT * FROM @TABLE -- This will throw an error because the variable no longer exists

A Common Table Expression (CTE) also acts like a temp table/view, but it is not created in TempDB. It is simply a named result set that exists in memory for the duration of the single statement that defines it.
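For symmetry with the examples above, a minimal sketch of the same data exposed through a CTE (the names are illustrative):

```sql
-- The CTE exists only for the single SELECT that follows it
;WITH PersonCTE (ID, FIRSTNAME, LASTNAME) AS
(
    SELECT 1, 'Avesh', 'Dhakal'
)
SELECT * FROM PersonCTE
-- Referencing PersonCTE in a later statement would throw an error,
-- because the CTE's scope ends with the statement that defines it
```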

June 22, 2014

SQL Server - One to one record mapping between two tables without any common column

Today I had to write a script that maps records one to one between two tables that share no common column. I think it may be helpful for you as well.

declare @Customer table (CustomerId int, CustomerName varchar(50))
declare @Branch table (BranchId int, BranchName varchar(50))

insert into @Customer values (1, 'Kale')
insert into @Customer values (2, 'Gore')
insert into @Customer values (3, 'Sailo')
insert into @Customer values (4, 'Mailo')

insert into @Branch values (1, 'Dharan Branch')
insert into @Branch values (2, 'Biratnagar Branch')

;with Customer (RowNumber, CustomerId, CustomerName) as
(
    select row_number() over(order by CustomerID) as RowNumber, * from @Customer
)
,Branch (RowNumber, BranchId, BranchName) as
(
    select row_number() over(order by BranchId) as RowNumber, * from @Branch
)
select cust.CustomerId, cust.CustomerName, bra.BranchId, bra.BranchName
from Customer as cust
left join Branch as bra
on cust.RowNumber = bra.RowNumber

June 17, 2014

Common Table Expression and its usage

According to Microsoft:

A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE or CREATE VIEW statement. A CTE is similar to derived table in that it is not stored as an object and lasts only for the duration of the query. Unlike a derived table, a CTE can be self-referencing and can be referenced multiple times in the same query.

A CTE can be used to:
  • Create a recursive query.
  • Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
  • Enable grouping by a column that is derived from a scalar subselect or a function that is either not deterministic or has external access.
  • Reference the resulting table multiple times in the same statement.
Using a CTE offers the advantages of improved readability and ease in maintenance of complex queries. The query can be divided into separate, simple logical building blocks. These simple blocks can then be used to build more complex, interim CTEs until the final result set is generated.

CTEs can be defined in user-defined routines, such as functions, stored procedures, triggers, or views.

Now, I would like to show some examples from my own experience.

For Example:

;WITH tbl(Id, AvgValue)
AS
(
       SELECT Id, Avg(Value) as AvgValue
       FROM Quantity
       GROUP by Id
)

SELECT Name, AvgValue
From tbl INNER JOIN
customer as cst on cst.id = tbl.id
Order by Name

In the above example, you can see the CTE acts as a named subquery that the main query then joins against. It's a kind of virtual view.

--------------------------------------------------------------------------------------------------------------------------

Let's suppose there are 3 tables which hold information about customer names and their respective counties. The first table, Customer, contains basic customer info; the second is County; and the third, CustomerCounty, holds the relationship between the Customer and County tables.

;WITH tblCTE (RowNumber, CustomerId, CountyName, County) AS
(
  SELECT 1, CC.CustomerId, MIN(C.CountyName), CAST(MIN(C.CountyName) AS VARCHAR(MAX))
  FROM CustomerCounty as CC
  INNER JOIN County as C ON CC.CountyId = C.CountyId
  GROUP BY CC.CustomerId

  UNION ALL

  SELECT CTE.RowNumber + 1, CC.CustomerId, C.CountyName, CTE.County + ', ' + C.CountyName
  FROM CustomerCounty as CC
  INNER JOIN County as C ON CC.CountyId = C.CountyId
  INNER JOIN tblCTE as CTE ON CTE.CustomerId = CC.CustomerId
  WHERE C.CountyName > CTE.CountyName
)

SELECT CTE.CustomerId, C.Name, CTE.County AS Areas
FROM tblCTE as CTE
INNER JOIN Customer as C ON C.CustomerId = CTE.CustomerId
INNER JOIN (SELECT CustomerId, MAX(RowNumber) AS MaxRowNumber
FROM tblCTE GROUP BY CustomerId) as tbl
ON CTE.RowNumber = tbl.MaxRowNumber AND CTE.CustomerId = tbl.CustomerId
ORDER BY CTE.CustomerId

In the above example, the Common Table Expression has two queries: the first acts as the anchor (base) query and the second as the recursive query. The recursive query references the CTE itself, which is what enables the recursive processing. The example concatenates strings from different rows into a single row.

Output:

CustomerId  Name   Areas
1           Harke  Luton, Bedford, Milton Keynes
2           Matt   Hitchin, Stevenage

June 14, 2014

SQL Server - Len() vs DataLength()

DataLength returns the length of any expression in bytes, which includes trailing spaces if they exist. DataLength is especially useful with varchar, varbinary, text, image, nvarchar and ntext data types because these data types can store variable-length data.

Len returns the number of characters, rather than the number of bytes, of the given expression, excluding trailing spaces.

So, we can say the Len function first right-trims the expression and then counts the characters, whereas the DataLength function gives the storage space required for the characters.

For Example:

select Len('Avesh') -- Returns: 5
select Len('Avesh ') -- Returns: 5
select Len(' Avesh') -- Returns: 6
select Len(' Avesh ') -- Returns: 6

select DataLength('Avesh') -- Returns: 5
select DataLength('Avesh ') -- Returns: 6
select DataLength(' Avesh') -- Returns: 6
select DataLength(' Avesh ') -- Returns: 7

June 11, 2014

SSRS Join Function - Display/Set multi-value parameter

The Join function concatenates the elements of an array into a single delimited string, which is how you can pass or display a multi-value parameter.

On the Parameters tab of the query definition, set the parameter value to (ParameterName below is a placeholder for the name of your parameter):

=Join(Parameters!ParameterName.Value, ",")

To display a multi-value parameter in a textbox:

=Join(Parameters!ParameterName.Label, ",")

June 10, 2014

SSRS - Passing Multiple Parameters with Stored Procedure

1) Add data source as needed.
2) Create datasets as needed. Below, you can see there are 4 parameters. We will focus on the contractname dataset for the multi-value parameter.
3) You can see a T-SQL stored procedure is used to get the list of contract names for the contractname dataset. This stored procedure returns the list of contract names.


4) Then you need to go to parameter. Right click on contractname parameter -> go to properties.
5) Define the values of dataset from Available Values.


6) In the general tab of the properties, select Allow multiple values

7) Run the report. You will be able to see as below.

8) When the parameter is passed on to the stored procedure, it should be used like this:
Select * from ContractTable where ',' + @contractname + ',' Like '%,' + Contract + ',%'
and date between @startfiscalyear and @endfiscalyear and quarter = @quarter

June 8, 2014

SSIS: Cannot Create Connector, The destination component does not have any available inputs for use in creating a path


When creating an SSIS package, I got the error below while trying to link the data flow source to the destination.
 
 


After looking more carefully, I realized that I had dragged an OLE DB Source component where the destination should be, instead of an OLE DB Destination. It was a quick fix: I just replaced the OLE DB Source with an OLE DB Destination.

May 27, 2014

SQL Server - Row_Number() vs Rank() vs Dense_Rank()

ROW_NUMBER(), RANK() and DENSE_RANK() are ranking functions that assign a value to every row. These functions are not only useful for assigning values; they also help solve complex queries, such as getting the top row for each group and then deleting or updating it, or removing duplicate values.

ROW_NUMBER() assigns a new sequential number to every row, regardless of duplicates, within a partition.

RANK() also assigns sequential numbers, but rows with duplicate values receive the same number, and gaps are left after each group of duplicates within a partition.

DENSE_RANK() is the same as RANK() except that it leaves no gaps after groups of duplicates within a partition.

Syntax:

ROW_NUMBER() OVER (ORDER BY CLAUSE)
ROW_NUMBER() OVER (PARTITION BY CLAUSE ORDER BY CLAUSE)

RANK() OVER (ORDER BY CLAUSE)
RANK() OVER (PARTITION BY CLAUSE ORDER BY CLAUSE)

DENSE_RANK() OVER (ORDER BY CLAUSE)
DENSE_RANK() OVER (PARTITION BY CLAUSE ORDER BY CLAUSE)


For Example:

DECLARE @RANKORDER TABLE
      (ID INT,
       FIRSTRANK INT,
       SECONDRANK INT)

INSERT INTO @RANKORDER VALUES (1, 1, 1)
INSERT INTO @RANKORDER VALUES (2, 2, 1)
INSERT INTO @RANKORDER VALUES (3, 2, 2)
INSERT INTO @RANKORDER VALUES (4, 3, 2)
INSERT INTO @RANKORDER VALUES (5, 3, 3)

SELECT *,
ROW_NUMBER() OVER (ORDER BY SECONDRANK DESC) AS ROWNUMBER,
ROW_NUMBER() OVER (PARTITION BY FIRSTRANK ORDER BY SECONDRANK DESC) AS ROWNUMBERPARTITION,
RANK() OVER (ORDER BY SECONDRANK DESC) AS RANK,
RANK() OVER (PARTITION BY FIRSTRANK ORDER BY SECONDRANK DESC) AS RANKPARTITION,
DENSE_RANK() OVER (ORDER BY SECONDRANK DESC) AS DENSERANK,
DENSE_RANK() OVER (PARTITION BY FIRSTRANK ORDER BY SECONDRANK DESC) AS DENSERANKPARTITION
FROM @RANKORDER
ORDER BY SECONDRANK DESC


The above example shows that the ROWNUMBER column returns a sequential number for each row based on the sort order, whereas the ROWNUMBERPARTITION column restarts the numbering within each partition (grouped by FIRSTRANK).

Similarly, the RANK column also returns a sequential number for each row, but duplicate values receive the same rank number and gaps are left after each group of duplicates.

DENSERANK returns the same values as RANK except that it fills in the gaps.

May 23, 2014

SQL Server Management Studio - Save change is not permitted

When we try to make change in the existing table and save it in SQL Server 2008 Management Studio, we may get this error:

"Saving changes is not permitted. The changes that you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."


This problem occurs because the Prevent saving changes that require table re-creation option is enabled in SQL Server Management Studio. It can be found in the Designers section under Tools - Options. When this option is enabled, users cannot make structural changes through SSMS that require the table to be re-created, such as

  • Adding a new column
  • Reordering the columns in the table
  • Changing the datatype of a column
  • Changing the Allow Nulls setting of a column


To get rid of this problem, you need to uncheck/turn off Prevent saving changes that require table re-creation, but keep in mind that after you turn off this option, any existing change tracking information is deleted when the table is re-created.

To check whether change tracking is enabled on the table, go to the Change Tracking item in the table's properties and see whether the value is True or False. If it is False and you need change tracking, set it to True.

May 19, 2014

Introduction to MongoDB

MongoDB is an open source, non-relational JSON store. It does not store data in relational tables; it is schemaless.

MongoDB is document oriented and has a dynamic schema. It does not support joins and does not have transactions across multiple documents.

JSON (JavaScript Object Notation) is the way we express data inside MongoDB.

JSON (JavaScript Object Notation):

JSON is a light-weight data-interchange format. There are two basic data structures inside JSON:
1) Arrays - lists of things
2) Dictionaries - associative maps

These can be combined in arbitrary ways. An array in JSON is represented by square brackets with a list of items in between, e.g. [...]. Dictionaries are represented by curly brackets and are a set of key and value pairs, e.g. {key: value}. Items are separated by commas, and the structures can be nested to arbitrary depth.

For more: http://json.org/

A JSON document is composed of field and value pairs. A value may consist of other documents, arrays, or arrays of documents. Each document has a unique identifier called ObjectId.

For Example:

{Name: "Avesh", Sex: "Male", Group: ["Developer", "Analyst"]}

MongoDB provides high performance, high availability and automatic scaling.

High Performance:

MongoDB provides high performance data persistence. Support for embedded data models reduces I/O activity on the database system. Indexes support faster queries and can include keys from embedded documents and arrays.

High Availability:

MongoDB provides high availability with the help of MongoDB's replication facility, called replica sets. Replica sets provide automatic failover and data redundancy.

A replica set is a group of MongoDB servers that maintain the same data set, providing redundancy and increasing data availability.

Automatic Scaling:

MongoDB provides horizontal scalability as part of its core functionality: automatic sharding distributes data across a cluster of machines, and replica sets can provide eventually consistent reads for low-latency, high-throughput deployments.


The latest version of MongoDB can be found on:

http://www.mongodb.org/downloads

Source:
http://docs.mongodb.org/manual/core/introduction/

April 30, 2014

SQL Server Date & Time - Data Types and its Function

For more details: http://msdn.microsoft.com/en-us/library/ms186724.aspx

Date & Time Data Types:

For more detail about data types, you can see in sys.systypes

date:

You can find this on SQL Server 2008, 2008 R2 and 2012. Its default literal format is YYYY-MM-DD

Syntax:

date

For example:

Declare @date1 date;
Declare @date2 date;

set @date1 = '2012-09-20'
set @date2 = '2012-09-20 12:40:30'

Select @date1
Output: 2012-09-20
Select @date2
Output: 2012-09-20

time:

You can find this on SQL Server 2008, 2008 R2 and 2012. This returns the time of day, without a time zone, based on a 24-hour clock.

Syntax:

time[(fractional second precision)]

For Example:

Declare @time4 time(4)
Declare @time7 time(7)
Declare @time time

set @time4 = '12:45:25.1234'
set @time7 = '12:45:25.1234567'
set @time = '2012-09-10 12:45:25.123'

select @time4
Output: 12:45:25.1234
select @time7
Output: 12:45:25.1234567
select @time
Output: 12:45:25.1230000

smalldatetime:

You can find this on SQL Server 2000, 2005, 2008, 2008 R2 and 2012. This returns a date with the time of day (24-hour clock), with seconds always 00 and no fractional seconds. Seconds are rounded to the nearest minute: values with seconds of 29.998 or less round down, and values of 29.999 or more round up to the next minute.

Syntax:
smalldatetime

For Example:

Declare @smalldatetime1 smalldatetime = '2012-09-13 12:45:31' -- will round to next minute
Declare @smalldatetime2 smalldatetime = '2012-09-13 12:45:29' -- will not round to next minute

Select @smalldatetime1
Output: 2012-09-13 12:46:00
Select @smalldatetime2
Output: 2012-09-13 12:45:00

datetime:

You can find this on SQL Server 2000, 2005, 2008, 2008 R2 and 2012. This returns a date with the time of day and fractional seconds, based on a 24-hour clock.

Syntax:
datetime

For Example:

Declare @datetime1 datetime = '12-23-35'
Declare @time1 time(4) = '11:10:05.1234'
Declare @datetime2 datetime = @time1

Select @datetime1
Output: 2035-12-23 00:00:00.000
Select @datetime2
Output: 1900-01-01 11:10:05.1234

datetime2:

It can be considered an extension of datetime, with a larger date range and higher default fractional-second precision. You can find this only in SQL Server 2008, 2008 R2 and 2012.

Syntax:
datetime2 [(fractional seconds precision)]

For Example:
Declare @datetime2_1 datetime2(4) = '12-13-25 12:32:10.1234'
Declare @datetime2_2 datetime2 = '12-13-25 11:32:10.1234567'
Declare @datetime2_3 datetime2 = '12-13-25'

Select @datetime2_1
Output: 2025-12-13 12:32:10.1234
Select @datetime2_2
Output: 2025-12-13 11:32:10.1234567
Select @datetime2_3
Output: 2025-12-13 00:00:00.0000000

datetimeoffset:

You can find this on SQL Server 2008, 2008 R2 and 2012. This holds a date along with the time of day and is time zone aware, based on a 24-hour clock.

Syntax:
datetimeoffset[(fractional seconds precision)]

For Example:
Declare @datetimeoffset datetimeoffset = '12-13-25 12:32:10 +05:45'
Declare @time time(3) = @datetimeoffset
Declare @date date = @datetimeoffset

Select @datetimeoffset 
Output: 2025-12-13 12:32:10.0000 +05:45 
Select @time
Output: 12:32:10.000
Select @date
Output: 2025-12-13

Date Time Function:

GETDATE:

It returns the database system timestamp with fractional seconds as a datetime, without a time zone offset.

Syntax:
GetDate()

Example:
Select GetDate()

Current_TimeStamp:

Same as GetDate().

GETUTCDATE:

Same as GetDate() but the value represents the current UTC time.

SYSDATETIME:

Same as GetDate() but returns a datetime2(7) value, with higher fractional-second precision.

SYSDATETIMEOFFSET:

Returns the current database system date and time as datetimeoffset(7), including the time zone offset.

SYSUTCDATETIME:

Returns the current UTC date and time as datetime2(7).

DateName:

It returns a character string for the specified part of the date.

Syntax:
DateName(datepart, date)

For Example:
Declare @date datetime
set @date = '2012-12-30 12:25:10.013'

Select @date
Output: 2012-12-30 12:25:10.013

Select datename(year, @date) 
Select datename(yy, @date) 
Select datename(yyyy, @date) 
Output: 2012

Select datename(quarter, @date)
Select datename(qq, @date)
Select datename(q, @date)
Output: 4

Select datename(month, @date)
Select datename(mm, @date)
Select datename(m, @date)
Output: December

Select datename(dayofyear, @date)
Select datename(dy, @date)
Select datename(y, @date)
Output: 365 (2012 is a leap year)

Select datename(day, @date)
Select datename(dd, @date)
Select datename(d, @date)
Output: 30

Select datename(week, @date)
Select datename(ww, @date)
Select datename(wk, @date)
Output: 53

Select datename(weekday, @date)
Select datename(dw, @date)
Select datename(w, @date)
Output: Sunday

Select datename(hour, @date)
Select datename(hh, @date)
Output: 12

Select datename(minute, @date)
Select datename(mi, @date)
Select datename(n, @date)
Output: 25

Select datename(second, @date)
Select datename(ss, @date)
Select datename(s, @date)
Output: 10

Select datename(millisecond, @date)
Select datename(ms, @date)
Output: 13

Datepart:

It is similar to datename but it returns as an integer.

Syntax:
datepart( datepart, date)

For Example:
Declare @date datetime
set @date = '2012-12-30 12:25:10.013'

Select @date
Output: 2012-12-30 12:25:10.013

Select datepart(year, @date) 
Select datepart(yy, @date) 
Select datepart(yyyy, @date) 
Output: 2012

Select datepart(quarter, @date)
Select datepart(qq, @date)
Select datepart(q, @date)
Output: 4

Select datepart(month, @date)
Select datepart(mm, @date)
Select datepart(m, @date)
Output: 12

Select datepart(dayofyear, @date)
Select datepart(dy, @date)
Select datepart(y, @date)
Output: 365 (2012 is a leap year)

Select datepart(day, @date)
Select datepart(dd, @date)
Select datepart(d, @date)
Output: 30

Select datepart(week, @date)
Select datepart(ww, @date)
Select datepart(wk, @date)
Output: 53

Select datepart(weekday, @date)
Select datepart(dw, @date)
Select datepart(w, @date)
Output: 1 (Sunday, with the default DATEFIRST setting of 7)

Select datepart(hour, @date)
Select datepart(hh, @date)
Output: 12

Select datepart(minute, @date)
Select datepart(mi, @date)
Select datepart(n, @date)
Output: 25

Select datepart(second, @date)
Select datepart(ss, @date)
Select datepart(s, @date)
Output: 10

Select datepart(millisecond, @date)
Select datepart(ms, @date)
Output: 13

Day

It is similar to Datepart(Day, date); if the date contains only a time, it returns 1.

Syntax: Day(date)

Month

It is similar to Datepart(Month, date); if the date contains only a time, it returns 1.

Syntax: Month(date)

Year

It is similar to Datepart(Year, date); if the date contains only a time, it returns 1900.

Syntax: Year(date)
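A few examples of the behaviour described above (outputs assume default settings):

```sql
SELECT DAY('2012-12-30')    -- Returns: 30
SELECT MONTH('2012-12-30')  -- Returns: 12
SELECT YEAR('2012-12-30')   -- Returns: 2012

-- With a time-only value, the date part defaults to 1900-01-01
SELECT DAY('12:30:15')      -- Returns: 1
SELECT MONTH('12:30:15')    -- Returns: 1
SELECT YEAR('12:30:15')     -- Returns: 1900
```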

IsDate

It checks whether the given expression is a valid datetime, date or time. It returns 1 if the expression is valid, otherwise 0.

Syntax: Isdate(expression)

For Example:
Select IsDate('2012-12-25 12:30:25.123') Returns 1
Select IsDate(NULL) Returns 0

DateAdd

It returns the specified date with the specified number of intervals added to the specified datepart of that date. The datepart argument cannot be a user-defined variable or its equivalent.

Syntax:
DateAdd(datepart, number, date)

For Example:

Declare @date datetime
Set @date = '2012-12-30 12:30:15.013'

Select dateadd(year, 1, @date)
Select dateadd(yy, 1, @date)
Select dateadd(yyyy, 1, @date)
Output: 2013-12-30 12:30:15.013

Select dateadd(month, 1, @date)
Select dateadd(mm, 1, @date)
Select dateadd(m, 1, @date)
Output: 2013-01-30 12:30:15.013

Select dateadd(quarter, 1, @date)
Select dateadd(qq, 1, @date)
Select dateadd(q, 1, @date)
Output: 2013-03-30 12:30:15.013

DateDiff

It returns the difference between startdate and enddate, counted in the specified datepart.

Syntax:
DateDiff( datepart, startdate, enddate)

For Example:

Declare @date1 datetime = '2012-12-30 12:30:10.123'
Declare @date2 datetime = '2013-05-25 10:15:20.123'

Select datediff (year, @date1, @date2)
Output: 1

April 19, 2014

Difference between SQL Server and PostgreSQL

I had a chance to work with PostgreSQL and SQL Server at the same time, pulling bulk data from PostgreSQL into SQL Server using a linked server and SQL Server's OPENQUERY function for report writing in SSRS. Though I don't have to work much with PostgreSQL, I thought of comparing these two database servers to clear up my basic understanding of how PostgreSQL differs from SQL Server.
  • Operating System
SQL Server only runs on Windows Desktop/Server, whereas PostgreSQL runs on Windows Desktop/Server, Linux, Unix and Mac.
  • Licensing
SQL Server is commercial closed source and you have to pay to use it, whereas PostgreSQL is a BSD-licensed open source database management system, which allows you to use it freely, even in your own commercial product.
  • Views
SQL Server supports automatically updatable and indexable views, whereas in PostgreSQL you have to write rules against views in order to update them.
  • Limit..offset
SQL Server does not have Limit..offset but supports TOP and the ANSI compliant ROW_NUMBER() OVER (ORDER BY column_name), whereas PostgreSQL does not support TOP but supports Limit..offset as well as the ANSI compliant ROW_NUMBER() OVER (ORDER BY column_name).

  • Concatenation
SQL Server uses the plus (+) operator for concatenation. Example: Select HouseNo + ', ' + Line1 + ', ' + County as HomeAddress. PostgreSQL uses the pipe (||) operator instead, so the previous example becomes Select HouseNo || ', ' || Line1 || ', ' || County as HomeAddress.
  • Like Statement
In SQL Server, LIKE statements are not case sensitive by default, whereas in PostgreSQL LIKE statements are case sensitive. To make a comparison case insensitive we can use ILIKE, but in some cases that may cause issues because it is not ANSI compliant and ODBC does not expose it. Another way is to simply convert both the column data and the variable (the value to compare) to lower case using the LOWER function and compare those.

In the latest version of PostgreSQL, the citext module is integrated, which provides a case-insensitive character string type.

According to postgresql.org, "The citext data type allows you to eliminate calls to lower in SQL queries, and allows a primary key to be case-insensitive. citext is locale-aware, just like text, which means that the matching of upper case and lower case characters is dependent on the rules of the database's LC_CTYPE setting. Again, this behavior is identical to the use of lower in queries. But because it's done transparently by the data type, you don't have to remember to do anything special in your queries."

  • Access data from Remote Database
SQL Server can pull data from a remote database server using linked servers and OPENQUERY, or the SQL Server four-part name, whereas PostgreSQL can pull remote data using dblink, which is less elegant and efficient than the SQL Server approach.
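As a sketch of the Limit..offset point above (the Customer table and CustomerId column are illustrative names):

```sql
-- SQL Server: first 10 customers via TOP
SELECT TOP 10 * FROM Customer ORDER BY CustomerId;

-- SQL Server: rows 21-30 via the ANSI ROW_NUMBER() approach
SELECT * FROM
(
    SELECT ROW_NUMBER() OVER (ORDER BY CustomerId) AS RowNumber, *
    FROM Customer
) AS numbered
WHERE RowNumber BETWEEN 21 AND 30;

-- PostgreSQL: the same two pages using LIMIT..OFFSET
-- SELECT * FROM Customer ORDER BY CustomerId LIMIT 10;
-- SELECT * FROM Customer ORDER BY CustomerId LIMIT 10 OFFSET 20;
```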

April 2, 2014

Union and Union all - SQL Server

I have come across the UNION and UNION ALL operators many times, so I thought of writing up the difference between these two.

Both UNION and UNION ALL combine the rows of two or more tables into one result set. The main difference is that UNION removes duplicate rows, which takes longer to execute because it has to perform a distinct operation, while UNION ALL returns all rows from all the tables and therefore executes faster.

Example:

SELECT * FROM TABLE1

ID
1
2
3

SELECT * FROM TABLE2

ID
2
3
4

SELECT * FROM TABLE1
UNION
SELECT * FROM TABLE2

ID
1
2
3
4

SELECT * FROM TABLE1
UNION ALL
SELECT * FROM TABLE2

ID
1
2
3
2
3
4 

Sorting the result of Union and Union all

SELECT * FROM TABLE1
UNION ALL
SELECT * FROM TABLE2
ORDER BY ID

ID
1
2
2
3
3
4

March 31, 2014

SQL Server - Convert Seconds into Hours:Minutes:Seconds format

This query converts a number of seconds into Hours:Minutes:Seconds format. (Note that RIGHT('0' + ..., 2) truncates the hours field for durations of 100 hours or more.)

DECLARE @TimeinSecond INT
SET @TimeinSecond = 8000 -- Change the seconds
SELECT RIGHT('0' + CAST(@TimeinSecond / 3600 AS VARCHAR),2) + ':' +
RIGHT('0' + CAST((@TimeinSecond / 60) % 60 AS VARCHAR),2)  + ':' +
RIGHT('0' + CAST(@TimeinSecond % 60 AS VARCHAR),2)

Output:
02:13:20

March 15, 2014

SQL Server - Split the string

To split a string in SQL Server, you can use the CHARINDEX and SUBSTRING functions.

The CHARINDEX function finds the position of a character (or substring) within a string. It accepts two required arguments: the expression to find and the expression to search. It returns the position of the first occurrence of the first argument within the second.

Syntax:

CHARINDEX( expressiontofind, expressiontosearch, start_location)

start_location is optional; if provided, the search begins at that position, otherwise it begins at the start of the string.

For example:

SELECT CHARINDEX(' ','Avesh Dhakal') --Result: 6

SELECT CHARINDEX(' ','Avesh Dhakal', 4) -- Result: 6

SELECT CHARINDEX(' ','Avesh Dhakal', 7) -- Result: 0

The SUBSTRING function returns part of the given expression. It accepts three arguments.

Syntax:

SUBSTRING(expression, start, length)

expression: the given string.

start: the starting position. If the position is less than one, the returned characters begin from the first position of the string.

length: how many characters of the string will be returned.

For Example:

SELECT SUBSTRING('Avesh Dhakal',3,6) -- Result: esh Dh


Now, using the CHARINDEX and SUBSTRING functions together, let's split a string:

For Example:

DECLARE @String VarChar(255) = 'Hello! How are you?'

SELECT SUBSTRING(@String , CHARINDEX('!', @String ) + 2, 50)

Result: How are you?
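Building on the same pattern, here is a sketch that splits a full name into first and last name (the @FullName variable is hypothetical, and exactly one space is assumed):

```sql
DECLARE @FullName VARCHAR(50) = 'Avesh Dhakal'

-- Everything before the space is the first name; everything after it is the last name
SELECT SUBSTRING(@FullName, 1, CHARINDEX(' ', @FullName) - 1) AS FirstName,
       SUBSTRING(@FullName, CHARINDEX(' ', @FullName) + 1, LEN(@FullName)) AS LastName
-- FirstName: Avesh, LastName: Dhakal
```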

March 2, 2014

Linked Server CRUD Operation

Using OPENQUERY


OPENQUERY function can be used to execute the pass-through queries on the linked server.

For example: Let postgres be the linked server name.

Select Statement

select * from openquery(postgres, 'select * from dbo.table')

Update Statement

update openquery(postgres, 'select column_name from table where id = 1')
set column_name = 'abc'

Insert Statement

insert openquery(postgres, 'select column_name from table')
values ('abc')

Delete Statement

delete openquery(postgres, 'select column_name from table where column_name = ''abc''')

Without Using OPENQUERY


There is also another method to run the pass-through queries on the linked server without using OPENQUERY.

LINKEDSERVERNAME.DATABASENAME.SCHEMANAME.TABLENAME

Instead of above select statement, it can be written as:

select * from postgres.databasename.dbo.table

In order to execute queries without using OPENQUERY, you need to enable Remote Procedure Calls (RPC). These options can be found on the Server Options page while creating the linked server.

RPC allows remote procedure calls from the linked server.
RPC Out allows remote procedure calls to the linked server.

Both RPC and RPC Out should be set to true.

Even when RPC and RPC Out are false, you can still execute a query without using OPENQUERY.

Example:

select * from postgres...dbo.table

It will run successfully.

March 1, 2014

Linked Server for a Remote Database Server

A linked server lets the SQL Server Database Engine execute queries against OLE DB data sources outside the local instance of SQL Server (a remote server).

To add a linked server, you can use either the GUI or the sp_addlinkedserver command.

Suppose, we want to create Linked Server for Postgres Database Instance. For this:

1) First, after installing the Postgres ODBC driver, create a data source using the ODBC Data Source Administrator.

Go to Run, type "odbcad32", then go to System DSN - Add - and configure the data source.



2) Now, to create Linked Server, you need to go to SQL Server Management Studio - expand the "Server Objects" folder, and then right click on the "Linked Server".

In the screenshot below, the data source name should be PostgresSQL64W, as given above in the ODBC connector. Here, give the linked server name, choose the provider and give the data source name. For a remote PostgreSQL database the provider will be the Microsoft OLE DB Provider for ODBC Drivers. The provider string should be blank. Catalog is optional (the default database can be used).


3) In detail, security and server option: http://msdn.microsoft.com/en-us/library/ff772782.aspx





February 26, 2014

SQL ORDER BY clause

We all know that the ORDER BY keyword is used to sort the output of a query in either ascending or descending order. By default, ORDER BY sorts the result in ascending order. To sort the result in descending order, all we need is the DESC keyword.

Ascending Order Example:

select column_a from table order by column_a

Result:
column_a
1
2
3
4
5

Descending Order Example:

select column_a from table order by column_a desc

Result:
column_a
5
4
3
2
1

Manipulating Order By keyword:

Above, we have seen how to sort a specified column in ascending or descending order. But in real-world projects we face situations where we have to sort in a specific way. Taking the example above, suppose we would like the number 3 of column_a to always appear at the top of the result. Then,

select column_a from table
order by case when column_a = '3' then '1' else column_a end asc

Result:
column_a
3
1
2
4
5

February 25, 2014

SQL Server Reporting Service Subscription (Report Automation using Windows File Share)

For the report automation, the report can be delivered using two options:
1) Windows File Share
2) Email Subscription


Here, we will discuss using Windows File Share.



The file name can be edited and made whatever is needed. If you add @timestamp after the file name, the dumped file will show the date and time when it was created.

The path where the reports are dumped must be provided in UNC (Uniform Naming Convention) format, i.e. \\servername\sharename.

Credentials to access the file share must be provided.

The Render format must be selected as needed.

Schedule the report as needed by clicking Select Schedule.



Parameter must be passed as required.

February 24, 2014

How to get SSRS Usage Report

In my experience, there will be a number of reports published in Reporting Services for business analysis. Though SQL Server Reporting Services has made it easy to pull data for analysis purposes, the developer has to work hard on information engineering and presentation. There will be a number of parameterized KPI reports which keep changing, and accordingly new reports will be deployed to the report server. It becomes necessary to know which reports are being used, and how often.

As there will be many reports deployed on the report server, it is important to know the execution time of each report and at what times the report server takes the maximum amount of time to run a report. Based on this information, report maintenance can be planned.

SQL Server Reporting Services captures the data related to each report execution in a table called "ExecutionLog" in the ReportServer database. Another table named "Catalog" is associated with ExecutionLog; together these two tables provide the information about each report. You can get as much information as you want using these tables.

Below is the example:

SELECT 
Cat.Name AS Report_Name, 
AVG(Ex.TimeDataRetrieval) AS AVGQueryTime, 
MIN(Ex.TimeDataRetrieval) AS MINQueryTime, 
MAX(Ex.TimeDataRetrieval) AS MAXQueryTime, 
STDEV(Ex.TimeDataRetrieval) AS SDQueryTime, 
AVG(Ex.TimeProcessing) AS AVGProcessTime, 
MIN(Ex.TimeProcessing) AS MINProcessTime, 
MAX(Ex.TimeProcessing) AS MAXProcessTime, 
STDEV(Ex.TimeProcessing) AS SDProcessTime, 
AVG(Ex.TimeRendering) AS AVGRenderTime, 
MIN(Ex.TimeRendering) AS MINRenderTime, 
MAX(Ex.TimeRendering) AS MAXRenderTime, 
STDEV(Ex.TimeRendering) AS SDRenderTime, 
AVG(Ex.ByteCount) AS AVGReportSize, 
AVG(Ex.[RowCount]) AS AVGRowCount, 
MIN(Ex.[RowCount]) AS MINRowCount, 
MAX(Ex.[RowCount]) AS MAXRowCount, 
COUNT(Ex.ReportID) AS [TotalExecutions],
Ex.Format, Ex.TimeStart
FROM ExecutionLog as EX 
INNER JOIN Catalog as Cat ON Ex.ReportID = Cat.ItemID 
GROUP BY Cat.Name, Ex.Format, Ex.TimeStart
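As a hedged variation on the query above, the same two tables can be filtered to recent activity (the 30-day window here is an arbitrary choice):

```sql
-- Count executions per report over the last 30 days
SELECT Cat.Name AS Report_Name,
       COUNT(*) AS ExecutionsLast30Days
FROM ExecutionLog AS Ex
INNER JOIN Catalog AS Cat ON Ex.ReportID = Cat.ItemID
WHERE Ex.TimeStart >= DATEADD(DAY, -30, GETDATE())
GROUP BY Cat.Name
ORDER BY ExecutionsLast30Days DESC
```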

February 20, 2014

SQL Server - Combining multiple row data into single row data with the help of comma

I had to prepare a report where data from multiple rows had to be merged into a single row, separated by commas. The best solution I found uses the STUFF function and the FOR XML clause of SQL Server.

According to Microsoft, "The STUFF function inserts a string into another string. It deletes a specified length of characters of the first string at the start position and then inserts the second string into the first string at the start position."

Syntax:

STUFF(character_expression, start, length, replaceWith_expression)

For Example:

SELECT STUFF('avesh', 2, 3, 'aastha')

Result:
aaasthah

Using FOR XML in a select statement returns the result in XML format, whereas a normal select query returns a rowset. There are different modes in the FOR XML clause; here we will use PATH mode.


DECLARE @Customer TABLE (CustomerId int, CustomerName varchar(50), BranchName varchar(50))

INSERT INTO @Customer VALUES (1, 'Kale', 'drn branch')
INSERT INTO @Customer VALUES (1, 'kale', 'brt branch')
INSERT INTO @Customer VALUES (3, 'Sailo', 'ktm')

Without using STUFF function:

SELECT CustomerId, CustomerName, BranchName = (SELECT ',' + BranchName
    FROM @Customer AS x2 WHERE CustomerId = x.CustomerId    
     FOR XML PATH(''))
FROM @Customer AS x
GROUP BY CustomerId, CustomerName
ORDER BY CustomerID

Result:



Using STUFF function:

SELECT CustomerId, CustomerName, BranchName = STUFF((SELECT ',' + BranchName
    FROM @Customer AS x2 WHERE CustomerId = x.CustomerId    
     FOR XML PATH('')), 1, 1, '')
FROM @Customer AS x
GROUP BY CustomerId, CustomerName
ORDER BY CustomerID

Result: 

February 15, 2014

SQL Server - Four Part Name

The name of a database object can be a four-part name in the following form:

server_name.[database_name].[schema_name].object_name - Four part name

server_name.[database_name]..object_name - Schema name is omitted.

server_name..[schema_name].object_name - Database name is omitted.

server_name...object_name - Database and schema names are omitted.


server_name: It can be either a remote server name or a linked server name.

Example:
SELECT * FROM ServerName.Master.dbo.sysobjects

database_name: If the object is in a local instance of SQL Server, this is the SQL Server database name; if the object is on a linked server, it is the OLE DB catalog.

Example:
SELECT * FROM Master.dbo.sysobjects

schema_name: The name of the schema that contains the object if the object is in a SQL Server database; if the object is on a linked server, it is the OLE DB schema name.

Example:
SELECT * FROM dbo.sysobjects

object_name: It is the name of the object.

Example:
SELECT * FROM sysobjects

February 12, 2014

Stored Procedure - QUOTED_IDENTIFIER ON/OFF, ANSI_NULL ON/OFF and NOCOUNT ON/OFF

Most SQL developers use these options while writing T-SQL stored procedures, user-defined functions and triggers. QUOTED_IDENTIFIER is applied at parse time, whereas ANSI_NULLS and NOCOUNT are applied at execution time.

QUOTED_IDENTIFIER controls how text inside single and double quotes is interpreted. When the setting is ON, any string inside double quotes is interpreted as a T-SQL object identifier such as a table name, procedure name or column name. Quoted identifiers do not have to follow the rules for T-SQL identifiers: they can be reserved keywords and can include characters not generally allowed in T-SQL identifiers. Any character string inside single quotes is interpreted as a literal.

When QUOTED_IDENTIFIER is off, the string that is inside single or double quotation mark will be interpreted as literals.

For Example:

SET QUOTED_IDENTIFIER ON
CREATE TABLE "SELECT" ("TABLE" INT) -- SUCCESS

GO

SET QUOTED_IDENTIFIER ON
SELECT "SOMETEXT" AS VALUE -- FAILS: "SOMETEXT" IS TREATED AS AN IDENTIFIER, NOT A LITERAL


SET QUOTED_IDENTIFIER OFF
CREATE TABLE "SELECT" ("TABLE" INT) -- FAILS: "SELECT" IS TREATED AS A LITERAL, NOT AN IDENTIFIER

GO

SET QUOTED_IDENTIFIER OFF
SELECT "SOMETEXT" AS VALUE -- SUCCEEDS BECAUSE "SOMETEXT" IS A LITERAL

ANSI_NULLS controls comparison operations with NULL values. When ANSI_NULLS is ON, a comparison using the equals (=) or not equals (<>) operator against a NULL value returns zero rows, even if there are NULL values in the column. It follows the ISO standard, so you need to use IS NULL and IS NOT NULL to compare against NULL values.

When ANSI_NULLS is OFF, a comparison with NULL using the equals (=) operator returns the rows whose value is NULL, and the not equals (<>) operator returns the rows whose value is not NULL. This does not follow the ISO standard: NULL = NULL evaluates to true.

For Example:

SET ANSI_NULLS ON
IF NULL IS NULL
   PRINT 'TRUE'
ELSE
   PRINT 'FALSE'

Result: TRUE

SET ANSI_NULLS ON
IF NULL = NULL
   PRINT 'TRUE'
ELSE
   PRINT 'FALSE'

Result: FALSE

SET ANSI_NULLS OFF
IF NULL = NULL
   PRINT 'TRUE'
ELSE
   PRINT 'FALSE'

Result: TRUE
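The same behaviour can be seen against a table. A small sketch, assuming the session default of ANSI_NULLS ON (the @T table variable is hypothetical):

```sql
DECLARE @T TABLE (Val INT)
INSERT INTO @T VALUES (NULL)
INSERT INTO @T VALUES (1)

-- With ANSI_NULLS ON, Val = NULL matches nothing; use Val IS NULL instead
SELECT * FROM @T WHERE Val = NULL   -- zero rows
SELECT * FROM @T WHERE Val IS NULL  -- returns the NULL row
```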

NOCOUNT stops the message reporting the number of rows affected from being returned during the execution of stored procedures.

For Example:

CREATE PROCEDURE USP_NOCOUNT
AS
BEGIN

    SET NOCOUNT OFF

    DECLARE @TEMP TABLE(ID INT)
 
    INSERT INTO @TEMP VALUES(1)
    INSERT INTO @TEMP VALUES(2)

    SELECT * FROM @TEMP

END

Then, execute the stored procedure  USP_NOCOUNT

Exec USP_NOCOUNT

After execution, you will get the message "1 row(s) affected" twice.

CREATE PROCEDURE USP_NOCOUNT
AS
BEGIN

    SET NOCOUNT ON

    DECLARE @TEMP TABLE(ID INT)
 
    INSERT INTO @TEMP VALUES(1)
    INSERT INTO @TEMP VALUES(2)

    SELECT * FROM @TEMP

END

Then, execute the stored procedure  USP_NOCOUNT

Exec USP_NOCOUNT

After execution, you will get message "Command(s) completed successfully".

February 10, 2014

SQL Server - Joins

INNER JOIN

This join returns the rows that match in both tables.

Select * from Table1 as tbl1
Inner Join Table2 as tbl2 on tbl1.column_name = tbl2.column_name

RIGHT OUTER JOIN

This join returns all the rows from the right table along with the rows that match in both tables. Columns from the left table that do not find a match return NULL values.
Select * from Table1 as tbl1
Right Outer Join Table2 as tbl2 on tbl1.column_name = tbl2.column_name

LEFT OUTER JOIN

This join returns all the rows from the left table along with the rows that match in both tables. Columns from the right table that do not find a match return NULL values.
Select * from Table1 as tbl1
Left Outer Join Table2 as tbl2 on tbl1.column_name = tbl2.column_name

FULL OUTER JOIN

This join returns all rows from both tables, matching up the rows wherever a match can be made and placing NULLs in the places where no matching row exists.
Select * from Table1 as tbl1
Full Outer Join Table2 as tbl2 on tbl1.column_name = tbl2.column_name

February 5, 2014

SQL Server - Concatenation

To concatenate two or more character or binary strings, + sign string operator should be used.

Syntax:

expression + expression

For Example:

SELECT 'Avesh' + ' Dhakal'

Result: Avesh Dhakal

When you need to concatenate two integers, you first need to convert/cast them to strings and then concatenate them.

For Example:

SELECT CAST(1 AS VARCHAR(10)) + ' ' + CAST(2 AS VARCHAR(10))

Result: 1 2
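On SQL Server 2012 and later, the CONCAT function can be used instead; it performs the conversion implicitly:

```sql
-- CONCAT implicitly converts its arguments to strings
SELECT CONCAT(1, ' ', 2) -- Result: 1 2
```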

January 20, 2014

Modelling a Data Warehouse

When designing a model for a data warehouse we should follow a standard pattern, such as gathering requirements, building credentials and collecting a considerable quantity of information about the data, or metadata. This helps to figure out the formation and scope of the data warehouse. This model of the data warehouse is known as the conceptual model. The general elements of the model are fact and dimension tables. These tables are related to each other, which helps to identify the relationships between them. This design is called a schema and is of two types: star schema and snowflake schema. The design of these schemas falls under physical model design (Jones and Johnson, 2010).
Before designing the physical model, a logical model should be designed based on the conceptual model. The logical model mainly focuses on granularity arrangement, data refinement and the definition of logical relation patterns (Fu-shan, 2009).
Granularity refers to "the level of detail or summarisation of the units of data in the data warehouse". A low level of granularity contains a high level of detail, and a high level of granularity contains a low level of detail. This is one of the major issues of data warehouse design, as it greatly affects the data and its queries (Inmon, 2005). Diverse categories of analytical processing use various levels of granularity, and the level of granularity affects database performance. A data warehouse consists of several combinations and details of data, commonly referred to as granularity. If the DW has many levels of explorable data layers, it is said to be more granular. Conventional database operations are generally categorised as low granular, whereas modern data warehousing operations are required to be more granular because of the need to explore data at several intensities. Thus, in a DW environment, granularity directly represents the richness of data quality and consequently establishes the intensity of database queries (Fu-shan, 2009).
Star Schema:
It consists of a single fact table at the centre linked to a number of dimension tables. OLAP focuses on the fact table, and data related to the facts is stored in the dimension tables. The dimension tables are not in normalised form (Wang et al., 2005). The fact table contains the primary keys of all the dimension tables as foreign keys. The advantage of implementing a star schema is that simple join queries are enough to get information. The disadvantage is that for complex systems it becomes somewhat complicated to read and query a massive amount of data (Jones and Johnson, 2010).
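The structure described above can be sketched as a minimal, hypothetical star schema (all table and column names are invented for illustration):

```sql
-- Two dimension tables plus a central fact table referencing them
CREATE TABLE DimProduct (ProductKey INT PRIMARY KEY, ProductName VARCHAR(50))
CREATE TABLE DimDate    (DateKey    INT PRIMARY KEY, CalendarDate DATE)

CREATE TABLE FactSales (
    ProductKey  INT REFERENCES DimProduct(ProductKey),
    DateKey     INT REFERENCES DimDate(DateKey),
    SalesAmount DECIMAL(10,2)   -- the numeric measure
)
```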

Snowflake Schema:

It’s a modification of the star schema. The dimensional hierarchy is presented clearly by normalising the dimension tables, which are used for drill-down and roll-up operations. Its advantage is that maintaining the dimension tables becomes easy (Wang et al., 2005).
Which schema to implement depends on the specific system requirements; the difference lies in performance and usability.
References:
1) Jones, J. and Johnson, E. (2010). Data Modelling for Business Intelligence with Microsoft SQL Server: Modeling a Data Warehouse. CA Erwin, pp. 1-9. [Online] Available from: http://www.ca.com/files/whitepapers/ca-erwin-db-modeling-bus-intel-....
2) Jones, J. and Johnson, E. (2010). Beyond the data model: designing the data warehouse. CA Erwin, pp. 1-9. [Online] Available from: http://www.ca.com/files/whitepapers/part3_beyond_dmd_data_warehouse...
3) Fu-shan, W. (2009). Application Research of Data Warehouse and its Model design. The 1st International Conference on Information Science and Engineering. pp. 798-801
4) Inmon, W. H. (2005). Building the data warehouse. Fourth Edition. Wiley Publishing, Inc., Indiana: Indianapolis.
5) Wang, J., Chen, T., Chiu, S. (2005). Literature Review on Data Warehouse Development. IACIS Pacific 2005 Conference Program, National Chengchi University, Taiwan. pp. 987-994.

January 16, 2014

Dimensional Modelling

There isn’t any specific standard for modelling a data warehouse. It can be built using either the “dimensional” model or the “normalised” model methodology. The normalised model normalises the data into third normal form (3NF), whereas the dimensional model collects the transactional data in the form of facts and dimensions. The normalised model is easy to extend, as we can add related topics without affecting the existing data. But one must have good knowledge of how the data is associated before writing a specific query, so it might be difficult to handle, and reporting queries may not perform well because a massive number of tables may be involved in each query. The dimensional model is very efficient for non-experts and performs well because data is classified in a logical way and similar types of data are stored together. But when adding new topics the whole warehouse must be reprocessed (Jones and Johnson, 2010).
The dimensional model is designed to optimise decision-support query performance in relational databases, whereas the normalised model is designed to eliminate redundancy in the data model and thereby optimise online transaction processing (OLTP) performance (Firestone, 1998). Therefore it can be said that the dimensional model is the best modelling method for data warehousing.
Data Cube
OLAP stores data in arrays which are logical presentations of business dimensions. A multidimensional array represents the relationships between data elements, because analysts' views depend on it. This multidimensional array data model is called a "Data Cube" (Kirkgiize et al., 1997). It consists of facts and dimensions instead of rows and columns as in the relational data model. Facts are the accurate or numeric data of a business activity, and dimensions are the sets of attributes that put facts into context (Wang, Chen, Chiu, 2005). Dimensions are interconnected in hierarchies, for example city, state, region, country and continent. Figure 1 shows a three-dimensional data cube of sales data (Kirkgiize et al., 1997).
In a data cube each cell contains one or several values called measures or metrics. It helps to analyse aggregated facts, and the level of detail is directly proportional to the number of dimensions in the cube. Here the axes represent dimensions and the space represents facts (Wang et al., 2005). The data cube in the figure contains three dimensions, namely Time, Geography and Product. Each cell is addressed by (T, P, G), and the business measure is the total sales: the cell stores the amount of product P sold in geography G in time period T, along with the total sales. Figure 2 shows the hierarchy of dimensions.

Features of data cube:

Slice and dice

The user can generate the exact view of the data required with the help of the data cube. This process is called slice and dice because the data is evaluated according to a subset alignment. The subset can also be rotated and figured as a parent one. For example, to find out the sales in a particular place at a specific time, a slice and dice operation can be performed on the cube. This helps to choose an arrangement along each dimension, be it time, geography or product, according to user needs (Kirkgiize et al., 1997).

Drill-down and roll-up

A data cube may contain hierarchical dimensions. These hierarchical dimensions provide data manipulation and detailed analysis at various levels of the dimensions. Moving up and down in a hierarchy is known as roll-up and drill-down respectively. After choosing the level of a dimension, it can be sliced and diced. According to their needs, users can drill down and roll up to view the data region-wise, product-wise or city-wise (Kirkgiize et al., 1997).
References:
1) Jones, J. and Johnson, E. (2010). Data Modelling for Business Intelligence with Microsoft SQL Server: Modeling a Data Warehouse. CA Erwin, pp. 1-9. [Online] Available from: http://www.ca.com/files/whitepapers/ca-erwin-db-modeling-bus-intel-....
2) Jones, J. and Johnson, E. (2010). Beyond the data model: designing the data warehouse. CA Erwin, pp. 1-9. [Online] Available from:http://www.ca.com/files/whitepapers/part3_beyond_dmd_data_warehouse...
3) Firestone, J. M. (1998). Dimensional Modeling and E-R Modeling In The Data Warehouse. Dimensional Modeling and E-R Modeling In The Data Warehouse, White Paper No. Eight. pp. 1-9.
4) Kirkgiize, R., Katie, N., Stolba, M. Tjoa, A. M. (1997). A Security Concept for OLAP. Proceedings of the 8th International Workshop on Database and Expert Systems Applications (DEXA '97), Institute of Software Technology (IFS). pp. 620-626.
5) Wang, J., Chen, T., Chiu, S. (2005). Literature Review on Data Warehouse Development. IACIS Pacific 2005 Conference Program, National Chengchi University, Taiwan. pp. 987-994.