
Thursday, February 4, 2016

Edition Upgrade in SQL Server

Hi Friends,

Check here for my last post of 2015, on Hadoop. With that done, let's start this post on how to perform an in-place edition upgrade. This procedure works well with SQL Server 2005 and above.

Requirement:

There was a requirement from the client to upgrade the edition from Enterprise to Standard for one of my SQL Server 2008 R2 instances.

So let's see how simple and time-efficient an in-place edition upgrade is:

Step 1: Launch the SQL Server setup file => go to the "Maintenance" tab on the left-hand side.



Step 2: Click the "Edition Upgrade" link on the right-hand side. Setup will first check the upgrade rules.



(Click Next on the next two screens.)



Step 3: Very important step: verify the new edition here (which I have highlighted below).



Step 4: Select the instance that needs to be upgraded from the drop-down.



(Click Next through the remaining screens.)



Hold on for about three minutes and your in-place edition upgrade is finished.
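Once setup completes, a quick way to confirm the new edition took effect is a few standard SERVERPROPERTY calls:

-- Confirm the edition and version after the upgrade
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;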

As simple and time-efficient as this is, it comes at a cost. AND the cost is this: if there is any error during the upgrade phase, there is no procedure to roll back.

The only option left to us is a complete uninstallation followed by a fresh installation. So before starting this procedure, make sure you have taken the necessary precautions.
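At a minimum, take fresh full backups of the system and user databases before you begin. A minimal sketch (the D:\Backup path is a hypothetical placeholder; adjust for your environment):

-- Pre-upgrade precaution: full backups first (D:\Backup is a hypothetical path)
BACKUP DATABASE master TO DISK = N'D:\Backup\master_pre_upgrade.bak' WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backup\msdb_pre_upgrade.bak' WITH INIT;
-- Repeat for each user database on the instance.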

Have you ever faced this error?


Thursday, January 21, 2016

Error - Instant File Initialization Failed

Hi Friends,

Here comes the first post of the wonderful year 2016. Last year we ended with a post introducing Hadoop. Let's start this year's first post with an error in SQL Server.

Description:

One of the errors I face very frequently nowadays, as you can see in the snapshot below, is "File initialization failed", because of which my restore activity failed.

This error occurred when I was in the middle of a migration activity, where I was supposed to back up and restore a database from one server to another. After executing the restore command with STATS = 1, I waited for the first 1 percent to complete (after which I could have a nap, because the backup file was huge and the activity was at midnight), but I kept waiting and waiting for that 1 percent. It got suspicious, because even one percent should not take that long to complete.
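For reference, the restore command looked roughly like this (the database name, logical file names, and paths here are hypothetical placeholders; STATS = 1 reports progress after every 1 percent):

-- Hypothetical names and paths; STATS = 1 prints progress every 1 percent
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backup\MyDB_Full.bak'
WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB.mdf',
     MOVE N'MyDB_Log'  TO N'E:\Log\MyDB.ldf',
     STATS = 1;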

So, I decided to stop the restore. Once the session was stopped, I found the below error message:

Instant File Initialization Failed

I will explain behind the scenes what exactly SQL Server does when we create or restore a database in a different post. In this post, let's just look at the solution for the error.

Solution:

1. Run => Secpol.msc;

Open Local Security Policy => Local Policies => User Rights Assignment => "Perform volume maintenance tasks" => right-click => add the account under which the SQL Server service is running, as you can see in the snapshot below.

2. No need to restart the Windows server itself, but do restart the SQL Server service so it picks up the new privilege.

3. Now, start the restoration process and this time it will work well.

Perform Volume Maintenance Task


So from now on, SQL Server will skip zero initialization whenever we create or restore a database's data files (note that log files are always zero-initialized). We will see exactly what this means in a different post.
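If you want to verify that instant file initialization is really in effect, one well-known trick (a sketch for a test instance; the database name is a throwaway) is to log file-zeroing messages to the error log with trace flags 3004 and 3605 and then create a database. With IFI working, only the log file shows a "Zeroing" message:

-- Test-instance sketch: surface zeroing messages in the error log
DBCC TRACEON (3004, 3605, -1);
GO
CREATE DATABASE IFI_Test;  -- throwaway database
GO
-- With IFI enabled, only the .ldf should appear in the 'Zeroing' entries
EXEC sp_readerrorlog 0, 1, N'Zeroing';
GO
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);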

Hope this saves your time and helps you. Don't forget to drop a comment below, and do vote below whether it is Interesting, Informative or Boring.

Facing trouble while switching a database from Single User mode to Multi User mode? Check here for the solution.

(It's been a long time, more than a couple of weeks, that I have been away from my blog. I'm afraid this could continue for a few more, due to multiple projects on weekdays as well as over the weekends, so my blogging might also be affected. Stay tuned; soon we will learn many things on SQL Server as well as Hadoop administration.)

Thanks,
Vikas B Sahu
Keep Learning and Enjoy Learning!!!

Tuesday, December 15, 2015

Introduction to Hadoop

Dear Friends,

A couple of months back I published a post on SQL Server 2016's new features here.

Meanwhile, let me introduce you to Hadoop. We will learn this as a series of inter-related posts, so don't miss any post in between and read them in order. Let's make learning Hadoop fun and interesting.

So, let's understand the formats of data that we handle in the real world:
  • Flat File
  • Rows and Columns
  • Images and Documents
  • Audio and Video
  • XML Data and many more......
Big Data is the ocean of data which an organization stores. These data come with three V's, i.e. Volume, Velocity and Variety.

Nowadays, huge volumes of data are generated by many sources such as Facebook, WhatsApp, e-commerce sites, and so on. These huge volumes of data are generated with high velocity; you could say the data multiplies every second, every minute, every hour. And along with the huge Volume and high Velocity, numerous Varieties of data are generated in different forms.

These data can be in any format: structured, semi-structured or unstructured. Data stored in the form of rows and columns is well defined as structured data, whereas data in the form of documents, images, SMS, video, audio, etc. can be categorized as unstructured, and data in HTML or XML format is semi-structured.

Q. I am sure you must be thinking: how does an RDBMS handle these kinds of unstructured or semi-structured data in its database?

A. Well, to handle these kinds of data we have special data types such as VARBINARY(MAX) and XML. The drawback is that an image is stored in binary format within the database (or the actual file is kept on the server via FILESTREAM), so there is a performance impact when storing and retrieving petabytes of data.
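As a small illustration (a minimal sketch; the table and column names are hypothetical), here is how structured, semi-structured and unstructured data might sit side by side in SQL Server:

-- Hypothetical table mixing the three data formats
CREATE TABLE dbo.ProductCatalog
(
    ProductId    INT IDENTITY(1,1) PRIMARY KEY, -- structured
    ProductName  NVARCHAR(100) NOT NULL,        -- structured
    SpecsXml     XML NULL,                      -- semi-structured
    ProductImage VARBINARY(MAX) NULL            -- unstructured (binary image)
);

INSERT INTO dbo.ProductCatalog (ProductName, SpecsXml, ProductImage)
VALUES (N'Toy Elephant',
        N'<specs><color>Yellow</color></specs>',
        0x89504E470D0A); -- a few sample bytes, not a real image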

More than that, Big Data is not just about maintaining and growing the data year on year; it is also about how you manage these data to make informed decisions. Data in Big Data can also come in various complex formats, and to manage and process these types of data we need large clusters of servers.
BIG DATA & HADOOP

With this introduction to Big Data, now let me introduce you to Hadoop. 

Hadoop is a framework for large clusters of servers, built to process large sets of data. It has two main core components: 'Hadoop MapReduce' (the processing part) and the 'Hadoop Distributed File System' (the storage part). The Hadoop project comes under Apache, which is why it is called 'Apache Hadoop'. The idea behind these two core components came into existence when Google released white papers on its 'Google File System (GFS)' and 'MapReduce' projects in 2003 and 2004.
Hadoop itself was created by Doug Cutting in 2005. Cutting, who was working at Yahoo! at the time he built the software, named it after his son's toy elephant.

Wikipedia defines Hadoop as "an open-source software framework written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware"

Hadoop is an open-source framework, free for both personal and commercial use under the Apache license. It allows distributed storage and processing of data sets across large clusters. On top of this, there are many more applications built by other organisations that use Hadoop or continuously work on the product, all of which come under the 'Hadoop Ecosystem'. Check here for the list of projects under the Hadoop Ecosystem. As we move further, we will see posts on the important projects in the Hadoop Ecosystem.

The Apache Hadoop architecture consists of the following components:
  • Hadoop Common: contains the libraries and other utilities needed by the other Hadoop modules.
  • Hadoop Distributed File System (HDFS): a cluster of servers with commodity storage, used for data storage across the cluster.
  • Hadoop YARN: this component is used for job scheduling and resource management in the cluster.
  • Hadoop MapReduce: the processing of the data is done by this component.
By now, I trust you have a fair enough idea about this technology. But how can one get into Hadoop, and who is the best person to do so?

The technical answer would be that those who are interested in learning this technology can get in two ways:
  • As a Developer
  • As an Administrator
  1. As a Developer: Hadoop is a framework built in the Java language, so those with a Java background have an easy path to becoming a Hadoop Developer. With the growing popularity of Hadoop, nowadays this is the most common designation you can find on job sites.
  2. As an Administrator: Most organisations with a Hadoop installation prefer a part-time or full-time administrator to manage their Hadoop clusters. It is not compulsory for the admin to have Java knowledge to learn this technology; indeed, they only need some basics for troubleshooting. Candidates with a database admin background (SQL Server, Oracle, etc.) who already have troubleshooting, server maintenance and disaster recovery knowledge are preferred, and anyone with network, storage, or server admin (Windows/Linux) skills can be the other best choice. Check here for a post describing in detail who suits Hadoop best.
The following questions might be on your mind if you want to get started as a Hadoop admin:

  1. Do we need any DBA skills? Of course yes, if we need to admin the Hadoop cluster (maintaining, monitoring, configuration, troubleshooting, etc.).
  2. Do we need to learn Java? Yes, at least some basics, to understand Java errors while troubleshooting an issue.
  3. Do we need to understand non-RDBMS products? Yes; Hadoop handles both SQL and NoSQL ("Not only SQL"), so having knowledge of a non-RDBMS product is most important.
  4. Do we need to learn Linux too? Yes, at least the basics.
In our next post we will see the concept of HDFS (Hadoop Distributed File System).

Interested in learning SQL Server Clustering? Check here. Stay tuned for many more updates...

Keep Learning and Enjoy Learning!!!

Monday, November 2, 2015

Error - While Putting Database from Single to Multi User Mode

Dear Friends,

Click here to see how to start SQL Server without the TempDB database. Now let's learn what to do, and how simple it can be, to change a database from Multi User mode to Single User mode or vice versa.

Indeed! It is simple with the following commands:

a. ALTER DATABASE [Out] SET SINGLE_USER;
b. ALTER DATABASE [Out] SET MULTI_USER;

If we don't mention any termination clause, as above, the command will wait until all running statements and open transactions complete.

Suppose there are n users connected to the database and you execute the above command; it could take a hell of a lot of time to complete.
So instead, to force-disconnect the users and put the database into Single User mode, you can fire one of the below commands:

ALTER DATABASE [Out] SET SINGLE_USER WITH ROLLBACK AFTER 30; -- after 30 seconds, cancels and rolls back incomplete transactions

ALTER DATABASE [Out] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- immediately cancels and rolls back incomplete transactions

ALTER DATABASE [Out] SET SINGLE_USER WITH NO_WAIT; -- errors out if there is any incomplete transaction

But again, it can get horrific if the database is in Single User mode and you cannot access it, because only one connection can be made at a time; and just think, that single connection has been taken by the system, i.e. SQL Server itself.

In this situation you are locked out, because you cannot access the database. As you can see in the snapshot, the database Out is used by the system, i.e. by a background process.
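You can check for yourself which session is occupying the database. A quick sketch using the legacy sys.sysprocesses view (works on SQL Server 2005 and above; Out is the database from this post):

-- Which session is holding the connection to the single-user database?
SELECT spid, loginame, program_name, status, cmd
FROM sys.sysprocesses
WHERE dbid = DB_ID('Out');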

If you try to bring the database back into Multi User mode, the system throws the following error:


Another possibility would be detaching the database. But when I tried detaching it, SQL Server first had to kill the connections to the database; or rather, I should frame it as: SQL Server kills only the user connections, not the system connection.

After loads of struggle we were back to square one: our database was not coming back to Multi User mode. It looked like a deadlock between the system SPID and the Out database. So we enabled the deadlock trace flags and checked the error log file. The following snapshot confirms that there was a deadlock:

DBCC TRACEON (1204,1222,-1)

Deadlock Graph
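With those trace flags on, the deadlock details are written to the error log. A quick way to search them out (sp_readerrorlog arguments: log number, 1 for the SQL error log, and a search string):

-- Pull deadlock entries out of the current error log
EXEC sp_readerrorlog 0, 1, N'deadlock';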
If we try to alter the database to put it in Multi User mode, we hit a deadlock, and since our ALTER statement has low deadlock priority, it fails with the following error message:


After random tries, the following commands saved us. We have to set the deadlock priority to high and then execute the Multi User mode query, like below:

SET DEADLOCK_PRIORITY HIGH;
GO
ALTER DATABASE [Out] SET MULTI_USER;

What this does is set the deadlock priority to HIGH, so our session wins the deadlock and the database is altered back to Multi User mode.
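You can confirm the database is back in Multi User mode with a quick catalog query:

-- Verify the database is back in MULTI_USER mode
SELECT name, user_access_desc
FROM sys.databases
WHERE name = 'Out';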

Guys, do share your feedback about this article, and of course about the blog too.

Want to start learning SQL Server Clustering? Check here for the three-part series on SQL Server Clustering.

Do you know MS SQL Server 2016 is ready to launch? Check here for the two-part series on the new features of SQL Server 2016.

Keep Learning and Enjoy Learning!!!