Tuesday, December 15, 2015

Introduction to Hadoop

Dear Friends,

A couple of months back I published a post on the new features of SQL Server 2016 here.

Meanwhile, let me introduce you to Hadoop. We will learn this as a series of inter-related posts, so don't miss any post in between and read them in order. Let's make learning Hadoop fun and interesting.

So, let's understand the formats of data that we handle in the real world:
  • Flat Files
  • Rows and Columns
  • Images and Documents
  • Audio and Video
  • XML Data, and many more...
Big Data is the ocean of data which an organization stores. This data is characterized by the three V's, i.e. Volume, Velocity and Variety.

Nowadays a huge Volume of data is generated by many sources such as Facebook, WhatsApp, e-commerce sites, and so on. This huge volume of data is generated with high Velocity; you could say it multiplies every second, every minute, every hour. Along with the huge Volume and high Velocity, a wide Variety of data is generated in different forms.

This data can be in any format, i.e. structured, semi-structured or unstructured. Data stored in the form of rows and columns is well defined as structured data, whereas data in the form of documents, images, SMS, video, audio, etc. is categorized as unstructured, and data in HTML or XML format is semi-structured.
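To make the three categories concrete, here is a minimal Python sketch (the sample records and names are made up for illustration): the same kind of information stored as rows and columns (structured), as XML (semi-structured), and as a free-form note (unstructured).

```python
import csv
import io
import xml.etree.ElementTree as ET

# Structured: rows and columns, schema known up front
rows = list(csv.reader(io.StringIO("id,name\n1,Asha\n2,Ravi\n")))
header, data = rows[0], rows[1:]

# Semi-structured: XML carries its own (flexible) structure in its tags
doc = ET.fromstring("<people><person id='1'>Asha</person></people>")
first = doc.find("person")

# Unstructured: a free-form note; no schema, the application must interpret it
note = "Asha called on Monday about the pending invoice."

print(header)
print(first.get("id"), first.text)
```

A relational engine handles the first shape natively; the other two are exactly the kinds of data Big Data systems are built for.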

Q. I am sure you must be thinking: how does an RDBMS handle these kinds of unstructured or semi-structured data in its database?

A. Well, to handle these kinds of data we have special data types such as Varbinary(Max) and XML. The drawback is that if we store an image, it is stored in binary format within the database, whereas the actual image sits in FILESTREAM or on the server itself. Hence there is a performance impact when storing and retrieving petabytes of data.

Moreover, Big Data is not just about maintaining and growing the data year on year; it is also about how you manage that data to make informed decisions. Data in Big Data can also come in various complex formats, and to manage and process these types of data we need a large cluster of servers.
BIG DATA & HADOOP

With this introduction to Big Data, now let me introduce you to Hadoop. 

Hadoop is a framework that runs on a large cluster of servers and is built to process large sets of data. It has two main core components, i.e. 'Hadoop MapReduce' (the processing part) and the 'Hadoop Distributed File System' (the storage part). The Hadoop project comes under Apache, which is why it is called 'Apache Hadoop'. The idea behind these two core components came into existence when Google released its white papers on the 'Google File System (GFS)' (2003) and 'MapReduce' (2004).
Hadoop was created by Doug Cutting in 2005. Cutting was working at Yahoo! at the time he built the software, and he named it after his son's toy elephant.

Wikipedia defines Hadoop as "an open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware".

Hadoop is an open-source framework, available for free as well as commercial use under the Apache license. It allows distributing and processing of datasets across a large cluster. On top of this, there are many more applications built by other organisations that use Hadoop or continuously work on this product, all of which come under the 'Hadoop Ecosystem'. Check here to find the list of projects under the Hadoop Ecosystem. As we move further we will see posts on the important projects under the Hadoop Ecosystem.

The Apache Hadoop architecture consists of the following components:
  • Hadoop Common: It contains the libraries and other utilities needed by the other Hadoop modules.
  • Hadoop Distributed File System (HDFS): It is the storage layer; it stores data across the cluster of servers on commodity storage.
  • Hadoop YARN: This component is used for job scheduling and resource management in the cluster.
  • Hadoop MapReduce: This component does the processing of the data.
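To give a feel for the MapReduce part, here is a tiny pure-Python sketch of the programming model (an illustrative in-process simulation, not Hadoop's actual API): a map step emits (word, 1) pairs, a shuffle step groups the pairs by key, and a reduce step sums each group. This is the classic word-count job that a real cluster runs in parallel across many machines.

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (key, value) pair for every word in the input line
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all values by key (Hadoop does this between map and reduce)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group; for word count, just sum the 1s
    return {key: sum(values) for key, values in groups.items()}

lines = ["Big Data is big", "Hadoop processes big data"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # "Big", "big" and "big" together count as 3
```

On a real cluster the map tasks run on the nodes where the HDFS blocks live, and only the shuffled key groups travel over the network; the logic, however, is the same as in this sketch.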
By now, I can assume you have a fair idea about this technology. But how, and as what, is the best way to get into Hadoop?

The technical answer would be that those who are interested in learning this technology can get in two ways:
  • As a Developer
  • As an Administrator
  1. As a Developer: Hadoop is a framework built in the Java language, so having a Java background makes it easier to become a Hadoop Developer. With the growing popularity of Hadoop, nowadays this is the most common designation you can find on job sites.
  2. As an Administrator: Most organisations with a Hadoop installation prefer a part-time or full-time Administrator to manage their Hadoop clusters. It is not compulsory for the admin to know Java to learn this technology; indeed, they should have some basics for troubleshooting. Candidates with a Database Admin background (SQL Server, Oracle, etc.) who already have troubleshooting, server maintenance and Disaster Recovery knowledge are preferred, and anyone with Network, Storage or Server Admin (Windows\Linux) skills can be the other best choice. Here in this post it is mentioned in detail who suits Hadoop best.
The following questions might be in your mind if you want to get started as a Hadoop Admin:

  1. Do we need any DBA skills? Of course yes, if we need to admin the Hadoop cluster (maintaining, monitoring, configuration, troubleshooting, etc.).
  2. Do we need to learn Java? Yes; at least some basics, to understand the Java errors while troubleshooting any issue.
  3. Do we need to understand non-RDBMS systems? Yes; Hadoop works with both SQL and NoSQL (Not Only SQL), so having knowledge of a non-RDBMS product is most important.
  4. Do we need to learn Linux too? Yes; at least the basics.
In our next post we will see the concept of HDFS (Hadoop Distributed File System).

Interested in learning SQL Server Clustering? Check here. Stay tuned for many more updates...

Keep Learning and Enjoy Learning!!!

Monday, November 2, 2015

Error - While Putting Database from Single to Multi User Mode

Dear Friends,

Click here to check how to start SQL Server without the TempDB database. Now let's learn what to do, and how simple it can be, to change a database from Multi User to Single User mode or vice-versa.

Indeed! It is simple with the following commands:

a. ALTER DATABASE Out SET SINGLE_USER
b. ALTER DATABASE Out SET MULTI_USER

If we don't mention any termination clause, as above, the statement will wait until the running transactions are completed.

Suppose there are n users connected to the database and you execute the above command; it can take a very long time to complete.
So instead, to force-disconnect the users and put the database in Single User mode, you can fire one of the below commands:

ALTER DATABASE Out SET SINGLE_USER WITH ROLLBACK AFTER 30 -- After 30 seconds it cancels and rolls back open queries

ALTER DATABASE Out SET SINGLE_USER WITH ROLLBACK IMMEDIATE -- It immediately cancels and rolls back open queries

ALTER DATABASE Out SET SINGLE_USER WITH NO_WAIT -- If there is any incomplete transaction, NO_WAIT makes the statement fail with an error

But again, it can turn into a horror if the database is in Single User mode and you cannot access it, because only one connection can be made at a time, and just think: that connection is taken by the system, i.e. SQL Server itself.

In this situation you are locked out, as you cannot access the database. As you can see in the snapshot, the database Out is used by the system, i.e. by a background process.

If you try to bring the database back into Multi User mode, the system throws the following error:


Another possibility would be detaching the database. But when I tried detaching it, SQL Server first killed the connections to the database. Rather, I should frame it as: SQL Server kills only the user connections and not the system connection.

After a lot of struggle we were back to square one: our database was not getting back to Multi User mode. It seemed like a deadlock between the system SPID and the Out database. So we enabled the trace flags and checked the error log file. The following snapshot confirms that there was a deadlock:

DBCC TRACEON (1204,1222,-1)

Deadlock Graph
If we try to alter the database to put it in Multi User mode, we get a deadlock, and since our ALTER statement has low priority it fails with the error message:


After some random tries we found the following command, and it saved us. We have to set the deadlock priority to high and then execute the Multi User mode query, like below:

SET DEADLOCK_PRIORITY HIGH
GO
ALTER DATABASE Out SET MULTI_USER

What this does is set the deadlock priority to HIGH, so our session wins the deadlock and the database is altered to Multi User mode.

Guys, do share your feedback about this article and of course about the blog too.

Want to start learning SQL Server Clustering? Check here for the three-part series on SQL Server Clustering.

Do you know MS SQL Server 2016 is ready to launch? Check here for the two-part series on the new features of SQL Server 2016.

Keep Learning and Enjoy Learning!!!

Tuesday, October 20, 2015

Error 233 - No process is on the other end of the pipe

Hi Guys,

As we have seen here an error related to a SQL Server restart, today we will see another one, Error 233: "A connection was successfully established with the server, but then an error occurred during the login process."

You might have faced this common issue in your DBA career, and yes, the solution is relatively simple.

So what was I doing? I was connecting to SQL Server with a Windows login, but during the login process it failed and prompted the below error message:

Error 233 in SQL Server
As I said, this seems a common problem, so there might be multiple workarounds for it.

Now, for me the solution was to start SQL Server Management Studio (SSMS) with "Run as Administrator". (When I was a fresher, I always used to wonder what difference it makes if I start a program with Run as Administrator. Let's find out the reason.)

The reasons for this error were:

a. When we run an application with "Run as Administrator", we get extra privileges which we may not have under the local user account.

b. So running the program with Run as Administrator grants extra rights to the account (but the account should have admin rights in Active Directory).

Once I started SSMS with Run as Administrator and connected to SQL Server, it succeeded.

There is a chance this may not solve the issue; check here to find more workarounds for it.

Do you know?? We should properly set the Auto Growth parameter on a database, else it could end up consuming extra space. Check here.

Thanks,
Vikas B Sahu

Keep Learning and Enjoy Learning!!!

Thursday, October 8, 2015

How to Start a SQL Server Instance when TempDB is Unavailable

Hi Guys,

Here is my last post on an error while starting SQL Server Analysis Services (SSAS), and here I've shared an earlier issue. I request you to please read those before proceeding.

Today we will learn a fact about the internals of SQL Server's TempDB (which was a little unusual for me before I actually faced it, so I must say it was a learning experience).

Following are some questions that come to my mind when I think about TempDB:

1. Where and how exactly is the TempDB database used?

2. What exactly happens when your TempDB is corrupted?

3. Have you ever thought of restarting the SQL Server services without TempDB? Is it possible to do that?

4. Is it possible to take backups (Full, Differential or Log) of the TempDB database, and to restore it?

Let's see the possible answers to these questions:

Q 1. Where and how exactly is the TempDB database used?

Ans. i. It is a system database. The database ID for TempDB is 2.
        ii. It is used to store temporary data. Whenever we create a temporary table (e.g. #abc) it gets created in TempDB.
        iii. It is also used for sorting data (e.g. if we use an ORDER BY clause in a query, it may use TempDB to sort).
        iv. It is also used for row versioning.
        v. The recovery model of TempDB is "Simple".
        vi. We cannot run DBCC CHECKDB on TempDB.
        vii. We cannot detach the TempDB files. (Rather, I must say one cannot detach any system database's files.)
        viii. Most importantly, it is re-created every time SQL Server is restarted. By checking TempDB's creation date we can see when SQL Server was last restarted (expect this as an interview question).

2. What exactly happens when your TempDB is corrupted?

Ans. i. Since we cannot run DBCC CHECKDB, we cannot identify whether there is any corruption in the TempDB database.
        ii. The only way to get rid of the corruption is to restart the SQL Server services, since that recreates the database files.
        iii. Without the TempDB database, the SQL Server services cannot start.
        iv. The Model database acts as the template for all user-created databases as well as for TempDB. So if the Model database is unavailable, TempDB will not get created and hence the SQL Server service cannot start.


TempDB Cycle

4. Is it possible to take backups (Full, Differential or Log) of the TempDB database, and to restore it?

Ans. i. Since TempDB is recreated every time we restart the instance, we cannot back up this database.

3. Have you ever thought of restarting the SQL Server services without TempDB? Is it possible to do that?

Ans. (Now here is the question that made me write this post.)

i. As said above, TempDB is recreated every time SQL Server is restarted. (This creates both the mdf and ldf files at the location recorded for TempDB in the Master database.)

ii. But what if there is disk-level corruption, or the defined location is not present for some reason?

iii. Now here comes my question: will the SQL Server services start? The answer is YES!!! It is possible (till now it was NO for me).

iv. If you start the SQL services normally (assuming there is disk-level corruption on the disk where TempDB was placed) and the disk is unavailable, you will get the below error message:


TempDB Error Log
v. So basically, starting the SQL services normally will not help you. Therefore we have to start the SQL Server instance in Single User mode as well as with Minimal Configuration.

Startup Parameter
vi. You can do that by adding the -m (Single User Mode) and -f (Minimal Configuration) parameters to the startup parameters of the SQL Server instance.

vii. This will start your SQL Server in minimal configuration (of course, the TempDB files are created), placing the TempDB data file in the user database data file location and the log file in the user log file location.

viii. In this way your TempDB files are placed in a location different from the actual TempDB file location.

ix. But this different location is temporary and applies only while your SQL Server instance is started in minimal configuration. So it is important to change the TempDB location to another drive by using the following command:


USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE
(NAME = tempdev, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2\MSSQL\DATA\TEMPDB\Tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE
(NAME = templog, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2\MSSQL\DATA\TEMPDB\Tempdb.ldf');
GO

SELECT name, physical_name AS Location
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO

Hope this post gives you a different angle on the concept of TempDB. Further on, we will see some more findings on TempDB. Share your comments to discuss more on it. You might be aware of SQL Server 2016; check here for its new features.

Thank You !!!

Wednesday, October 7, 2015

Error - Analysis Services Failed To Start in SQL Server 2012

Hi Guys,

Read my last blog on an error while configuring a SQL Server Cluster here. As DBAs, we cannot, or rather should not, limit ourselves to only Database Engine related stuff. Today let's learn something related to Analysis Services, i.e. SSAS.

In one of my migration projects, I wanted to migrate an SSAS database (called a "Cube") to another server.

So firstly, I needed the backup of the Cube. See the following command to take the backup of the Cube.
SSAS Cube Backup Code
Here Test is the database name and Test.abf is the backup name (the backup is taken in the default location).

Now, I wanted to restore this backup to my new server. But when I tried to connect to SSAS, it failed because SSAS was disabled. So I tried to start SSAS and it gave the following error:

SQL Server Analysis Services Stopped

And when I checked Windows Event Viewer it captured the below error message:

Windows Event Viewer Error
From the error in the Event Viewer it is confirmed that it is something related to a permission issue. So what we commonly do is go to that particular drive\folder and give full rights to the account which runs SQL Server Analysis Services. But unfortunately it didn't help.

I kept searching Google for this error, and most sites provide the same solution we saw above. The search continued and continued till I landed on the next solution, which was to check the msmdsrv.ini file (it is the configuration file for AS).

Basically, when we configure AS we have to pass a location where the Data and Log folders will be created, but in my case no such folders were created. So what I did was as follows:

1. Created two folders, Data and Log, on the E:\ and F:\ drives respectively, and gave full permissions to them.

2. Opened the msmdsrv.ini file (in Notepad) from this location (default):

C:\Program Files\Microsoft SQL Server\MSAS11.MSSQLSERVER\OLAP\Config

msmdsrv.ini
3. As we can see, the DataDir and LogDir tags contain the locations for the files; change the paths and save the file.

4. Then tried to restart AS, and it started successfully.

So the resolution for this seems simple. Want to know what SQL Server Azure is? Read my introduction post on Azure. Do comment if you like or dislike the page.

Thank You !!!
Keep Learning and Enjoy Learning !!!

Monday, October 5, 2015

SQL Server 2012 Clustering Errors

Hey Friends,

So till now we have seen the installation and configuration of SQL Server 2012 Clustering in our three part series as Part 1, Part 2 and Part 3 of SQL Server 2012 Clustering.


Now let us see few errors which I have faced during the Installation and Configuration of SQL Server 2012 Clustering:

1. Error: Cluster Service verification failed
    Description: During one of the server migrations, i.e. Windows Server 2012 and SQL Server 2008 R2

Add a failover Cluster Node
Error-1
Solution: Basically, we have to run the below command from PowerShell:

Install-WindowsFeature -Name RSAT-Clustering-AutomationServer

Basically, the MsClus.dll library is disabled by default in Windows Server 2012. Running the above command enables it, which is required for the installation of a SQL Server 2008 R2 cluster.

I got the solution for this from this MSDN link.

2. Error: Cluster Service verification failed
    Description: During one of the server migrations, i.e. Windows Server 2012 and SQL Server 2008 R2

Install a SQL Server Failover Cluster

Solution: 

Follow the below steps:

Copy C:\Windows\Cluster\Clusres.dll to C:\Windows\system32 and rename the file to W03a2409.dll.

For this error, if you search on the internet you might get multiple solutions. I found this one the shortest and best way to resolve the error.

I got the solution for this from this Microsoft link.

Keep Learning and Happy Learning!!!

Monday, September 14, 2015

SQL Server 2012 Clustering - Part 3

Hi Guys,

As we have seen in my previous posts how to configure Windows Clustering and set up the Quorum, now we will move forward to the final step: configuring the SQL Server Cluster.

Installing and configuring a SQL Server Cluster is very similar to installing a standalone SQL Server. Here we will install and configure a two-node Active-Passive cluster, and we will look only at the pages that differ while installing and configuring SQL Server Clustering; the rest of the pages remain the same.

Let's start with the installation of SQL Server Cluster on Active Server. Following are snapshots for it:

1. Start the installation and click the "Installation" link on the left-hand side of the page. Then start the installation with "New SQL Server Failover Cluster Installation".


2. After a couple of Nexts you will land on the "Instance Configuration" page. Here you have to enter the "SQL Server Network Name" and the "Instance Name" (for a named instance).

For e.g., if you enter the Network Name "SQLServer_01" and Instance Name "SQL01", then the server name for connectivity will be "SQLServer_01\SQL01".


3. Again, after a few Nexts you will land on the "Cluster Resource Group" page, where you have to enter a unique name as the Cluster Resource Group name. This helps in case there are multiple cluster configurations.


4. Here we have to check all the disks which we want to bring under that cluster group.


5. Enter a unique IP address in the Address box after un-checking DHCP.


6. By now your SQL Server Cluster is installed on the active node. Now you have to add a node on the passive instance. As you can see in the snapshot, you have to start the installation and click the "Installation" link on the left-hand side of the page, but this time choose "Add node to a SQL Server Failover Cluster".


The rest of the process is simple. So now the SQL Server Cluster is ready for you to deploy your database.

Click here to refer Part 1 and Part 2 of this topic.

We will discuss the Errors while configuring the SQL Server Clustering in different post. So stay tuned.

Till then Keep Learning and Enjoy Learning!!!

Tuesday, September 8, 2015

Error 1807 - Database creation failed


Hi Friends,

While creating a database, Error 1807 occurred, as you can see in the following snapshot.

Basically, the CREATE DATABASE failed because an exclusive lock could not be obtained on the Model database.

Error 1807 - Database creation failed

Run the below set of queries to find out where exactly the Model database is in use:

USE master
GO
IF EXISTS (SELECT request_session_id FROM sys.dm_tran_locks
           WHERE resource_database_id = DB_ID('Model'))
    PRINT 'Model Database being used by some other session'
ELSE
    PRINT 'Model Database not used by other session'

SELECT request_session_id FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('Model')

DBCC INPUTBUFFER(52) -- pass the session_id returned by the query above

Kill that SPID and re-run the CREATE DATABASE statement. This time the database will be created without any error.

Check out the error here while executing the Extended Stored Procedure.

Keep Learning and Enjoy Learning!!!

SQL Edition Upgrade Architecture Mismatch

Hi Friends,

I have recently posted an article on how to configure Quorum which was Part 2 post of three part series of SQL Server Clustering.

Recently I was checking my old mails, and I found an error which we had faced a year back, so I thought of sharing it with you guys. As my headline states, "SQL Edition Upgrade Architecture Mismatch", it somewhat belongs to an edition upgrade error.

Following is the snapshot of the error:

SQL Edition Upgrade Architecture Mismatch
Generally we get this error on the landing page itself, during the verification step, if SQL Server finds a mismatch in the bits, i.e. x86 or x64. The solution for this is very simple.

Go to the Options tab on the left-hand side of the initial page. There you can see two radio buttons, x86 and x64. By default it is x64. You can change this as per your requirement.

You can see the following snapshot for reference:


After changing it you can proceed with the installation part.

If any of you are planning for SQL Server certification, refer to this post; it will give you an overview of the examination. Also you can comment on it if you have any questions.

Keep Learning and Enjoy Learning!!!

Monday, September 7, 2015

SQL Server 2012 Clustering - Part 2

Dear Friends,

As you have seen how to configure Windows Clustering, now let's proceed to the next step of SQL Server 2012 Clustering, which is configuration of the Quorum.

For configuration of Quorum following are the two approaches:

Method 1: Failover Cluster Manager GUI

Method 2: Windows PowerShell

Let's have a look at how to configure the Quorum by these two methods:

Method 1: Through Failover Cluster Manager GUI

Below are the snapshot with description for the configuration of Quorum:

1. Open Failover Cluster Manager (FCM). Towards the extreme right you can see "More Actions"; click that, then click "Configure Cluster Quorum Settings...". You can see these steps in the below snapshot.



2. Once you are on the cluster quorum configuration page, you can refer to the below snapshot in clockwise direction.

i. Check the radio button "Select the quorum witness".
ii. Next, check "Configure a file share witness".
iii. Pass the location (path name); here it will create the binaries for the Quorum.
iv. Click Next and verify all the settings which you have configured.


3. Click Finish and view the report. You have successfully configured the Quorum.


Method 2: Through Windows PowerShell

Configuring the Quorum through Windows PowerShell is simple. The only thing we need to run is a one-liner command, as you can see in the below snapshot.



  • Open Windows Power Shell from the Task bar or open run and type "powershell".
  • Run the below command:
  • Set-ClusterQuorum -NodeAndFileShareMajority \\abc\sqlserver\

Till now we have seen how to configure a Windows Cluster and how to configure the Quorum. Both of these are prerequisites for configuring a SQL Server Cluster as well as SQL Server AlwaysOn. In Part 3 of this series we will see how to configure the SQL Server Cluster.

Refer to the MSDN link here for more knowledge on Quorum settings.

Click here to refer Part 1 and Part 3 of this topic.

Thank you guys for reading the post, click here to read my first post on this blog. Do comment on the post.

Keep Learning and Happy Learning!!!

Friday, September 4, 2015

Dropping a SQL Server Database

Hi Guys,

In my last post I shared an error where I was unable to start my SQL Server Database Engine. Today I will share a small finding which I thought was worth sharing.

In one of my migration projects, what I observed was that even after dropping a database, the data files and log files still existed. Ideally this should not happen, because if we drop a database the data file and log file associated with it should be dropped as well. But let's see what exactly happened that the database files were not dropped.

So what are the reasons for this?

Following are the various ways to remove a database from, or take it offline in, the SQL Server Database Engine:

a. Drop the Database
b. Detach \ Attach
c. Offline \ Online

The general syntax for dropping a database is: DROP DATABASE Test

Let's look at the scenario and what exactly happened:

For the production migration, what we were supposed to do was remove the test-migrated database and replace it with the new production-migrated database. Following is the sequence I followed:

Current Production Server : SQLServer-Current
New Production Server : SQLServer-New
Database Name : Test
  1. The test migration of the database "Test" was done on "SQLServer-New" from the "SQLServer-Current" server.
  2. After proper testing on "SQLServer-New", the database was put Offline.
  3. On the day of the actual production migration, we dropped the database so that we could create a fresh copy of it.
  4. Once it was dropped, we started restoring the database to the location where we had kept the data and log files.
  5. It popped up an error message that the mdf and ldf files already exist; either I should choose new file names or I need to change the path.
  6. Since I could not change the database path, I was forced to change the file names.
  7. After that I was able to restore the database.
  8. But I was wondering how the files could still exist if I had dropped the database. So once the migration activity was completed successfully I started troubleshooting it, and here I found something new to learn:
"If you put a database in the Offline state and then drop it, you will always find the files associated with the database still present in the defined location."

Share your comments or any queries below to discuss more.

Tuesday, August 25, 2015

Error 1814 - Could not Start SQL Server Services

Hey Guys,

Some of you might be very familiar with the below error. Let's see under what circumstances I faced it.

A few months back, while I was configuring "AlwaysON" on one of the servers, something went wrong, due to which I had to completely decommission the AlwaysON configuration along with Windows Clustering.

After doing this, I restarted the server, and when the server was up, what I saw was that the SQL Server service was disabled. I tried starting it through Configuration Manager, but hard luck; I was facing the below error message.

Error 1814
Let's have a look below into the SQL Server error log to see what it looked like:

Error Log
With these error logs we started troubleshooting:
  1. The error in the SQL error log states that "TempDB" was not created due to insufficient disk space.
  2. Now we were wondering about the error, as it was a new server and the disk had sufficient space. So insufficient space was out of the question.
  3. Just to cross-verify, I opened My Computer, and what I saw was that only the C: drive was visible; the rest of the drives were not.
  4. During installation I had kept TempDB on the D: drive.
  5. Now here comes the twist; let's recollect the AlwaysON activity above, which for some reason I had to decommission.
  6. During decommissioning I had put the disks into the offline state, due to which the disks were not accessible.
  7. So at the start of SQL Server, if TempDB cannot be created, the SQL Server service will not start.
Brief note on System Databases:

  1. Master, Model, MSDB and TempDB are the system databases.
  2. The files of the Master database are referenced in the startup parameters when the SQL Server instance starts.
  3. So if the Master database is corrupted, the SQL Server instance will not start.
  4. Also, one of its important properties is that it stores the location information for the other databases. Therefore the Master database is said to be the heart of SQL Server.
  5. The Model database acts as a template for the other user databases as well as for TempDB at creation time.
  6. So if the Model database is corrupted, the SQL Server instance will not start either, as TempDB will not be created, and if TempDB is not created the SQL Server instance will not start.
So guys, the system databases play a very important role in the proper functioning of a SQL Server instance.

Here my question is: what if MSDB is corrupted, will the SQL Server instance start? You can comment below.

Thank You Guys.
Keep Learning and Enjoy Learning!!!

Wednesday, August 12, 2015

SQL Server 2012 Clustering - Part 1

Dear Friends,

Recently I configured Clustering as a Disaster Recovery (DR) and High Availability (HA) solution for one of my projects. You might find many blogs on this topic explaining the concept and how to configure SQL Clustering.

Here I will try to explain the same in my own terms and keep it as simple as possible. And yes, it will be from scratch, i.e. we will first see how to configure a Windows Cluster and then, on top of it, configure SQL Server Clustering. It will be a multi-part series.

Brief on Clustering:  

A cluster can be defined as two or more independent machines working together by sharing a common set of disks. This is done so that, in case of a disaster on one physical machine, the disks automatically fail over to another machine, since the disks are shared between the servers. This allows the databases to be recovered after a disaster, so this solution can be used for Disaster Recovery (DR) as well as High Availability (HA).

I. Steps to configure Windows Clustering using the GUI:

1. To start with Windows Clustering, it is mandatory to have shared storage between the servers (which should be set up by the SAN administrator).
To verify these disks, open Server Manager and check whether the shared disks are visible on both servers, as you can see in the snapshot below.

Server Manager GUI


 2. Now we have to add the "Failover Clustering" feature on the server. Open "Server Manager", then click "Add Roles and Features".

Below snapshot means:

   i. Select the Role-based option.
   ii. Select the appropriate server name.
   iii. Click Next.
   iv. Check "Failover Clustering".
    v. Click "Add Features", then click Install.



3. Once the feature is added, go to Server Manager, then Tools, then Failover Cluster Manager to open the Failover Cluster Manager GUI.


4. As shown in the snapshot below, go to the extreme right and click Validate Configuration.


5. The Validate Configuration step basically checks storage, network, etc. One needs to review the recommended settings and validate the configuration.
The disks will go offline and come back online if disk validation is selected. (Disk validation is normally left unchecked when the cluster is already in production.)

Below snapshot means:

   i. Enter both server names.
   ii. Run the recommended settings.
   iii. You can view the progress.
   iv. View the errors, if any, or click Finish.


6. Once it is successfully validated, you need to provide a Windows Cluster name and an IP address for that Windows cluster, as you can see in the snapshot below. Once you have completed this, click Finish.



You can cross-verify from the command prompt by pinging the Windows Cluster name or the IP address provided in step 6 above.

II. Steps to configure Windows Clustering using PowerShell:

1. We first need to install the Failover Clustering feature and its management tools:

Following commands can be run from the Power Shell:

a. Get-WindowsFeature Failover*

b. Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

This command installs the Failover Clustering feature on the local server. If you want to install the same on a remote server as well, run the following command:

c. Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName XYZ


2. Now that you have installed the Failover Clustering feature on the servers, next you have to create the Windows cluster across them:

Following commands can be run from the Power Shell:

d. New-Cluster -name SQL_Cluster -Node ABC, XYZ -StaticAddress xxx.xx.xx.xxx


where SQL_Cluster is the Windows Cluster name,
ABC and XYZ are the names of the servers between which the cluster is to be created, and
xxx.xx.xx.xxx is the static IP address for the Windows cluster.
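The GUI's "Validate Configuration" step from section I also has a PowerShell equivalent that is worth running before creating the cluster; a sketch using the same placeholder server names:

```powershell
# Equivalent of the GUI's "Validate Configuration" step:
# runs the storage, network, etc. validation tests against both nodes
Test-Cluster -Node ABC, XYZ

# After New-Cluster has run, confirm the cluster exists
Get-Cluster
```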

So you have seen how simple it is to build Windows Clustering through the GUI as well as PowerShell. This is a prerequisite not just for SQL Server Clustering but also for configuring AlwaysOn.

That's the end of Part 1 of SQL Server 2012 Clustering. We have seen how to configure Windows Server clustering, which is the first step towards configuring SQL Server clustering.

Next we will configure the quorum for the cluster.

Click here to refer Part 2 and Part 3 of this topic.

Thank you for reading this post. So stay tuned and share your comments too.

Keep Learning and Happy Learning!!!

Thursday, August 6, 2015

New Features in SQL Server 2016 - Part 2

Good Morning Guys,

As we have seen the top 3 new features in SQL Server 2016, here is the continuation of that post. We will see few other enhancements in it.

Following are the list extending to these top 3 features:

4. Enhancements to AlwaysOn:

As everyone knows, the concept of AlwaysOn was introduced by Microsoft in SQL Server 2012. It was introduced for mission-critical enterprise applications, to address the many limitations of previous SQL Server versions around High Availability and Disaster Recovery.

It was released only for Enterprise Edition, but in SQL Server 2016 it will also be available in Standard Edition, of course with limited functionality. AlwaysOn also saw enhancements in SQL Server 2014.

There is 1 primary replica and up to 8 secondary replicas, and up to 3 replicas can be synchronized, compared to 2 synchronized secondary replicas in 2014.

Distributed transactions (MSDTC) were not supported between databases in an Availability Group in 2012 and 2014; they are now supported in SQL Server 2016.

We can now perform full, file, or filegroup backups (as copy-only) and log backups on secondary replica databases, but cannot perform differential backups.
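As a sketch, a full backup on a secondary replica has to be taken with the COPY_ONLY option (the database name and path are illustrative):

```sql
-- Full backup on a secondary replica must be copy-only
BACKUP DATABASE MyAGDatabase
TO DISK = N'D:\Backup\MyAGDatabase_full.bak'
WITH COPY_ONLY, INIT;

-- Log backups on a secondary are taken as usual
BACKUP LOG MyAGDatabase
TO DISK = N'D:\Backup\MyAGDatabase_log.trn';
```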

For more detail please refer to this blog on AlwaysON.

5. Native JSON Support:

JSON stands for JavaScript Object Notation. This support is mainly provided for developers, who can use it to organize data in a more logical manner. It is quite similar to the XML support already available in SQL Server.

It formats a SQL result set as JSON by adding the FOR JSON clause at the end of the SELECT statement. It is very similar to FOR XML, though FOR JSON supports two modes:
  • AUTO
  • PATH
Following is the syntax for using the JSON:

SELECT column, expression, column as alias
 FROM table1, table2, table3
 FOR JSON [AUTO | PATH]
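As a quick sketch (the Employee table and its columns are hypothetical), FOR JSON AUTO turns each row of the result set into a JSON object:

```sql
-- Hypothetical table Employee(EmpID, Name)
SELECT EmpID, Name
FROM dbo.Employee
FOR JSON AUTO;
-- The result is a single JSON string shaped like:
-- [{"EmpID":1,"Name":"John"},{"EmpID":2,"Name":"Jane"}]
```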

Figure 1 shows a demo of how JSON output can be produced from a SQL table:

JSON Code
Figure 1
Please refer to the MSDN blog here for more details on JSON.


6. Enhancement In-Memory OLTP:

The In-Memory OLTP feature was introduced in SQL Server 2014 under the codename "Hekaton", derived from the Greek word for 'one hundred': the goal of the project was a 100x performance improvement.

It has a separate query processing engine with a lock-free design that still maintains transactional data integrity. This makes it different from pinning tables in the SQL Server 6.5 release or simply putting databases on SSDs (Solid State Disks). Refer to this link to get an idea of the In-Memory OLTP architecture.

When it made its debut it came with a lot of drawbacks, and many of those demerits have now been turned into merits. Let's see some of them below:

    i.  The maximum table size was 256 GB, which has been increased to 2 TB.

    ii. TDE, one of the key security features in SQL Server, was not supported in the 2014 version, but it is supported in 2016.

    iii. Another major drawback: clauses such as IN, EXISTS, DISTINCT, OR, NOT, and OUTER JOIN were not supported in queries. These clauses are now supported in SQL Server 2016.

    iv. LOB data types such as Varbinary(MAX) and Varchar(MAX) were not supported in the released version, but from SQL Server 2016 they are supported.

    v. Similarly, foreign keys were not supported in the older version but are now supported in SQL Server 2016.

Many other properties have been enhanced for In-Memory OLTP in SQL Server 2016; check this MSDN link for more details.
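As a minimal sketch (the table and column names are illustrative), a memory-optimized table can be declared like this, assuming the database already has a filegroup with CONTAINS MEMORY_OPTIMIZED_DATA:

```sql
-- Prerequisite (not shown): a MEMORY_OPTIMIZED_DATA filegroup in the database
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload NVARCHAR(MAX) NULL,        -- LOB types supported from SQL Server 2016
    LastTouch DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```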

These were a few of the new features and enhancements in SQL Server 2016. Check out this link, which has a series of SQL Unplugged videos, for more details. So guys, get ready to work with these exciting new SQL Server 2016 features for administrators.

Along with the features for DBAs, many more are waiting for the BI and developer folks as well.

Thursday, July 16, 2015

Msg 22049-Error executing extended stored procedure: Invalid Parameter

Hi Guys,

It's time to learn from an error which I ran into today. If you remember, I posted a backup script here. The purpose of the script is to take database backups along with a few options. The script has undergone a few changes since then.

But here I want to discuss an issue which I faced while implementing this code in a new environment.

Scenario:

  1. We wanted to configure this database backup script on a new server. There were a few databases on the server, and we noticed that one of the database names was 60 characters long (it was a SharePoint application database).
  2. Due to that, we were getting this error message: (Msg 8152, Level 16, State 2, Line 1: String or binary data would be truncated).
  3. To fix it, we changed the variable to Nvarchar(Max).
  4. After changing this variable, another error popped up: (Msg 22049, Level 15, State 0, Line 0: Error executing extended stored procedure: Invalid Parameter).
  5. Please see the snapshot in Figure 1.
  6. Between steps 3 and 4, I was executing the SP from a job and was unable to see this error message, as some error messages were truncated. So I ran the SP manually from SSMS, and that is how I discovered this error.
  7. Looking into the error, I found the problem was in the last portion of my code. The first portion, 'take DB backup', was running fine; the second part, 'delete the old backups', was causing the issue, as it uses an extended stored procedure (dbo.xp_delete_file).
  8. Not sure of the exact cause, but after a little research I found a partial solution, then tested the code with some changes to the variable, and it was successful.
  9. Please see the demo code in Figure 2.  

Msg 22049-Error executing extended stored procedure: Invalid Parameter
Figure 1
Figure 2
Can you find any difference between Figure 1 and Figure 2? (Apart from the error, of course.)

The cause of the error was that the @path variable in the above code had the datatype Nvarchar(Max); when such a variable is passed to an extended stored procedure, in our case dbo.xp_delete_file, it throws the above error.

But if we declare the variable with a fixed length, as in Figure 2, and pass it to the extended stored procedure, the error is resolved.

So remember: if you are passing a variable to an extended stored procedure, define a fixed length for that variable.
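To illustrate the fix, mirroring Figures 1 and 2 (the path and parameter values are illustrative, and note that xp_delete_file is an undocumented procedure):

```sql
-- Fails with Msg 22049: NVARCHAR(MAX) passed to an extended stored procedure
DECLARE @path NVARCHAR(MAX) = N'D:\Backup\';
-- EXEC master.dbo.xp_delete_file 0, @path, N'bak', N'2015-07-01';

-- Works: the variable has a fixed length
DECLARE @fixedPath NVARCHAR(1000) = N'D:\Backup\';
EXEC master.dbo.xp_delete_file 0, @fixedPath, N'bak', N'2015-07-01';
```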

Keep Learning and Enjoy Learning!!!

Sunday, July 12, 2015

New Features in SQL Server 2016 - Part 1

Hello Friends,

I've given you a short introduction to SQL Server 2016 here; now it's time to learn some of the much-awaited new features for DBAs. You can consider these likely interview questions as well.
I'll try to explain these features one by one:

1. Stretch Database:  

This is one of the most interesting features introduced by Microsoft in SQL Server 2016. The word "stretch" means expand: you can stretch your local (on-premises) database to Azure (the cloud). So here we see the combination of on-premises + Azure in one database.
Stretch Database

By default, this feature is disabled. We need to enable it through sp_configure 'remote data archive'.
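Enabling it can be sketched as:

```sql
-- Enable the Stretch Database feature at the instance level
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
```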

Once it is enabled, you can configure the option as follows:
Right-click the database, then Tasks, then 'Enable Database for Stretch...

You need to supply your Azure credentials and select the tables which you want to move.

The whole purpose of Stretch Database is that we can move old, historic data to Azure, with remote query processing.

No changes are required on the application side. If a query is executed from an application, it hits the on-premises database, which fetches the data from Azure and returns the result set (only if that particular table's data has been moved to Azure).

The backup and restore procedure for these databases is a little different from the regular one. A regular 'Full' backup will only cover the on-premises part of the database.

This can be a good option for your environment: you save on hardware cost, backup time is reduced, and since historic data is moved out, query processing for current data improves. Most importantly, prior knowledge of SQL Azure is not a prerequisite, so any DBA who hasn't had the luck to work on SQL Azure need not worry.

There is some performance impact, because a query may have to fetch data from on-premises as well as Azure. This is the major drawback of this feature.

This feature may be available only in Enterprise Edition (not confirmed yet).

This was just a brief note on this feature. More information about Stretch Database is available on MSDN here. You can also refer to this site, which covers every detail of Stretch Database with GUI screenshots.

2. Always Encrypted

Moving on from the interesting Stretch Database feature, you might well have questions about data security whenever data moves from on-premises to Azure or vice versa.

With this let me introduce to the next exciting feature of SQL 2016 which is Always Encrypted.
You may be well aware of TDE (Transparent Data Encryption), which was introduced in SQL Server 2008 Enterprise Edition. Its use is to encrypt data at rest with the help of certain keys (a master key and a certificate). But while the data is encrypted at rest, it is fully transparent in flight, i.e. when data moves between the application and the database. So a man-in-the-middle attack is easily possible. This was the main drawback of TDE.

To overcome this drawback, Microsoft has introduced a new, or one could say extended, security feature beyond TDE: the "Always Encrypted" feature in SQL Server 2016.

Always Encrypted
Basically, there is an enhancement in the ADO.NET library which protects the data at rest as well as in motion.

The keys and certificates which we create in SQL Server have to be deployed when the application is built, and this deployment of the keys and certificates is handled by the ADO.NET library.

The figure on the right gives a fair idea of this. Data in flight is encrypted, and decryption takes place on the client side with the help of the keys available to the ADO.NET library.

This was just an overview of Always Encrypted; you can find more details in the MSDN blog here, explained in depth with GUI screenshots. This is a very important feature for any organisation that handles critical data in its databases, with respect to auditing and compliance.
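As a rough sketch of the syntax (the table, column, and key names are illustrative, and the column encryption key must already exist), an Always Encrypted column is declared with an ENCRYPTED WITH clause:

```sql
-- Deterministic encryption requires a BIN2 collation on character columns.
-- CEK_Auto1 is an illustrative column encryption key name.
CREATE TABLE dbo.Patients
(
    PatientId INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
```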

3. PolyBase:

This is one of the most important, exciting, and unexpected features Microsoft has ever introduced. Before this version, an RDBMS product could interact only with other RDBMS products (through linked servers).
But for the first time in SQL Server history, it can interact with a non-RDBMS product as well. If you have gone through my previous post here, I have already introduced you to this non-RDBMS product, i.e. Hadoop.

PolyBase
Looking into the future of the database world, Microsoft decided to introduce this feature in its upcoming product.
Earlier, Apache Sqoop (a tool for transferring data between a Hadoop cluster and relational databases) provided the ability to interact with RDBMS applications, but it has a few limitations.
These limitations can be overcome with this new feature in SQL Server 2016.

If you have both SQL Server and a Hadoop cluster in your environment, it will no longer be necessary to learn Hadoop queries to fetch data; in SQL Server 2016 it can be done easily, just like with a linked server.

How will it work?

  • Microsoft has introduced two new SQL services which will be installed during the installation of SQL Server. (Uncheck the box if you have no need for these services.)
  • We need to start by setting the sp_configure parameter 'Hadoop Connectivity' to an appropriate value.
  • You can use the following configuration values:

0 - No Hadoop connectivity (default)
3 - Enable connectivity to Cloudera CDH for Linux
4 - Enable connectivity to Hortonworks Data Platform for Windows Server (HDP), HDInsight on Analytics Platform System, or HDInsight's Microsoft Azure blob storage
5 - Enable connectivity to Hortonworks Data Platform (HDP) for Linux
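For example, choosing value 5 (Hortonworks HDP on Linux) would look like:

```sql
-- Select the Hadoop distribution to connect to
EXEC sp_configure 'hadoop connectivity', 5;
RECONFIGURE;
-- A restart of the SQL Server service is required for the change to take effect.
```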

Kindly refer to this link for complete details about this feature.

This was just an overview of three new features introduced by Microsoft in the new version of SQL Server 2016. We will see a few more features in the next post.

Till then stay tuned for more updates and feel free to share your comments here. Also do like and share if you liked the post.

Check this link for Part 2 of this post.

Keep Learning and Enjoy Learning!!!