
LessThanDot

A decade of helpful technical content

This is an archive of the posts published to LessThanDot from 2008 to 2018, over a decade of useful content. While we're no longer adding new content, we still receive a lot of visitors and wanted to make sure the content didn't disappear forever.

Checking for NULL values in all columns that allow NULLs in all the tables

This post is based on a question I answered earlier today: someone wanted to check every column that allows NULLs, in every table, for NULL values. The reason people might want to do this check is that they want to make all the columns in a database not nullable (after all, we all know that developers hate NULLs). The stored procedure below is based on the code that George Mastros wrote for the following blog post: Search all columns in all the tables in a database for a specific value
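
The gist of the technique is easy to sketch. Here is a minimal version of my own (not George's procedure, and not the full stored procedure from the post): walk INFORMATION_SCHEMA.COLUMNS for nullable columns, build one COUNT per column, and run the whole thing as dynamic SQL.

```sql
-- Minimal sketch: one COUNT(*) of NULLs per nullable column,
-- glued together with UNION ALL and executed as dynamic SQL.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ''' + c.TABLE_SCHEMA + N'.' + c.TABLE_NAME + N''' AS TableName, '''
    + c.COLUMN_NAME + N''' AS ColumnName, COUNT(*) AS NullCount FROM '
    + QUOTENAME(c.TABLE_SCHEMA) + N'.' + QUOTENAME(c.TABLE_NAME)
    + N' WHERE ' + QUOTENAME(c.COLUMN_NAME) + N' IS NULL'
FROM INFORMATION_SCHEMA.COLUMNS c
JOIN INFORMATION_SCHEMA.TABLES t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE c.IS_NULLABLE = 'YES'
  AND t.TABLE_TYPE = 'BASE TABLE';

EXEC sp_executesql @sql;
```

Any column that comes back with a NullCount of zero is a candidate for ALTER TABLE ... ALTER COLUMN ... NOT NULL. On a large schema you may want to run this in batches rather than as one giant statement.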

Read More...

SQL Server Forums – Helping to build a knowledge base

Welcome to another day of the Community Service week for SQL University. Jes Borland (Blog | Twitter) wrote an excellent class yesterday on the SQL community and the areas that help connect us all. Today we're going to move on to the service parts of the SQL community and show how we can keep building it. I'd like to thank Jes again for helping author today's topic on technical forums. As with all SQL University weeks, please take a moment to rate us by filling out the SQL University Course Evaluation. We greatly appreciate your feedback so we can all become better.

Read More...

Review: NHibernate 2 – Beginner’s Guide

The people at Packt Publishing asked me to review NHibernate 2: Beginner's Guide by Aaron B. Cure. This book is for anyone who uses NHibernate.

Who this book is written for: This book is for new and seasoned developers of .NET web or desktop applications who want a better way to access database data. It is a basic introduction to NHibernate, with enough information to get a solid foundation in using NHibernate. Some advanced concepts are presented where appropriate, to enhance functionality or in situations where they are commonly used.

Read More...

SQL Server and the Auto Close Setting

Today I woke up to a little over a hundred emails from one of my database servers letting me know that my resources were jumping around like a kangaroo. Actually, more like a boxing match with one… In the mix of those emails I also had alerts stating, "[database_name_withheld] has a status of Suspect, Cleanly Shutdown". I actively monitor the state of each database being open or closed (which also shows statuses like suspect, recovering, etc.), and I recommend the same so you catch these situations. When I read the emails, I knew exactly which setting had been set to true: "Auto Close".
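
If you want to check your own servers for this, a quick sketch of both the check and the fix looks like the following (the database name here is the withheld stand-in from the alert above):

```sql
-- List databases that currently have Auto Close turned on
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Turn it off for an affected database
ALTER DATABASE [database_name_withheld] SET AUTO_CLOSE OFF;
```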

Read More...

SQL Server and High Availability

Welcome to the last class of HA / DR week for SQL University. It has been a great week discussing these topics with all of you. Yesterday we [recapped][1] those classes in order to highlight the key points of the week. So far we've covered a great deal, but we have really only scratched the surface of SQL Server's features for HA and DR. Today will be another scratch at that surface, focused on the High Availability side of SQL Server. Be sure to check the resource links throughout this article; they extend today's material and further build your knowledge of the vast set of capabilities we have at our disposal.

Read More...

How to clean a table of badly named columns

Someone gives you a backup of a database; you restore it, and the first thing you notice is that the column names have percent signs and underscores in them. It would be easy to fix if it were one table, but in this case there are hundreds of tables. The solution is to loop over information_schema.columns, find all the columns that contain those characters, and then rename them with the sp_rename procedure. I will show you two ways to do this: one that executes the code directly, and one that generates a script that you can then execute.
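
To give you the flavor, here is a minimal sketch of the script-generating variant. The renaming rule (simply stripping the offending characters) is my assumption; adjust it before running the generated script, and watch out for name collisions:

```sql
-- Emit one sp_rename call per column whose name contains % or _
-- (the brackets in the LIKE patterns escape the wildcards).
SELECT 'EXEC sp_rename '''
     + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) + '.' + QUOTENAME(COLUMN_NAME)
     + ''', ''' + REPLACE(REPLACE(COLUMN_NAME, '%', ''), '_', '') + ''', ''COLUMN'';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%[%]%'
   OR COLUMN_NAME LIKE '%[_]%';
```

Copy the resulting rows into a query window, review them, and execute.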

Read More...

SQL University – Recess time!

The recess bell just rang for the SQL University HA / DR classrooms. While all of the SQL kiddies are running around the playground playing with the things they have learned this semester, the chalkboard is going to get a workout. (Image courtesy of the Bart Simpson Chalkboard Generator, with a linkback to Jeff Smith's excellent article.) Over the last week we've gone over a lot regarding HA and DR. On the first day we defined situations and the key factors needed to be successful in obtaining secure data services and high availability of those data services. Both of these strategies to protect our data against disasters, local and remote, always start with the definitions required to plan out the implementation. We learned together that just throwing things like log shipping into the mix may not truly give us the protection we need; that is what happens when we leave important business entities out of our planning and fail to document how our systems would come back from a disaster.

Read More...

Using Multiple jQuery UI Sliders on a Single Page

I ran into a bit of frustration lately with jQuery UI's slider widget, more specifically when trying to use several of them on the same page. I wanted to follow the same pattern shown in the example for snapping to increments, where I have a div for the slider and the value is actually set to a text input for easy form submission. The individual elements look something like this:

```html
<div class="SliderControl">
    <label for="clarity">Clarity:</label>
    <input type="text" class="SliderText" readonly="readonly" id="clarity" name="clarity"/>
    <div class="RatingSlider" id="claritySlider"></div>
</div>
```

Basically there's an input, and a single slider per input. The slider's name is {input name}Slider. I had some trouble reconciling this setup with the way that the example on the jQuery site is laid out.
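
For a taste of where this ends up, here is a rough sketch (my own, not the post's final code) of wiring every slider in a loop instead of one hard-coded block per control; the range and default values are made up:

```javascript
// Initialize each .RatingSlider and push its value into the matching input,
// relying on the "{input name}Slider" id convention from the markup above.
$(function () {
    $('.RatingSlider').each(function () {
        // "claritySlider" -> "clarity": recover the input id from the slider id
        var $input = $('#' + this.id.replace(/Slider$/, ''));

        $(this).slider({
            min: 1,
            max: 10,
            step: 1,          // snap to whole increments
            value: 5,
            slide: function (event, ui) {
                $input.val(ui.value); // keep the text input in sync
            }
        });

        $input.val($(this).slider('value')); // seed the initial value
    });
});
```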

Read More...

Log Shipping for cheap DR

Welcome to day three of HA and DR week at SQL University. Today we are going to look at cheap DR. Yes, setting up DR can be inexpensive, and the best part of this strategy is that it comes with most editions of SQL Server. The method is log shipping. Log shipping (LS) has a bad name in the disaster recovery (DR) world: there are concerns about the ability to fail back to the primary site after a disaster, and LS is often thought of as a maintenance-intensive setup that leaves a mess of files behind. Today's class will go over some methods to handle these and other concerns, along with the simplicity of configuring and monitoring LS in SQL Server 2008 (R2).
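
Before the wizard screenshots, it helps to remember that under the covers LS is nothing more than a scheduled backup, copy, and restore of the transaction log. A hand-rolled sketch of one cycle (the database name, share, and file name are made up for illustration):

```sql
-- On the primary: back up the transaction log to a shared folder
BACKUP LOG [SalesDB]
TO DISK = N'\\fileshare\logship\SalesDB_001.trn';

-- On the secondary: restore it, staying ready for the next log backup
-- (use WITH STANDBY instead of NORECOVERY for read access between restores)
RESTORE LOG [SalesDB]
FROM DISK = N'\\fileshare\logship\SalesDB_001.trn'
WITH NORECOVERY;
```

The built-in log shipping feature simply runs this cycle on SQL Agent schedules and monitors it for you.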

Read More...

T-SQL Tuesday – Give me my cores, SSIS 2008!

I'm jumping into the T-SQL Tuesday fun this week. The very well known Jorge Segarra ([blog][1] | [twitter][2]) is [hosting the fun][3] this time around. It is a busy week at that, with SQL University writing and everything else going on in the SQL community. The topic, your hottest and most favorite new feature in SQL Server 2008 (R2), had me wanting to throw SSIS out there once more and show off the Data Flow Engine changes. OK, the Data Flow Engine isn't strictly a "new feature," but given the redesign, I'm throwing it into the mix with the rest. This was a big, big, and did I say BIG, change in SSIS 2008. Being a performance freak, the changes to the Data Flow and the effective use of multi-core processors were what had me the most excited. See, with SSIS 2005, the Data Flow was sent off on its merry way running each execution tree with only one lonely execution thread. That meant one thread! That doesn't help us much in the core-happy world we live in. So in SSIS 2008 this architecture changed in a big way. Did I mention this was big?

### **Off to the races we go with Pipeline Parallelism**

What SSIS 2008 has given us over SSIS 2005 and its execution tree is the ability to run more than one component from a single tree. This is really a huge change. Before this change and automated thread scheduling in SSIS, we had to try to build our own parallelism through package design, and in some cases those design changes caused other performance problems themselves. Now, with a thread pool, threads are assigned dynamically (yes, auto-magically) to components. Thread pooling is not a new concept in the computing world (or the .NET Framework): the pool manages where and on what a set of threads will work, and when work is thrown at it, the work is sent off to threads that are free to take it. This means multiple threads working on multiple jobs in parallel. Thread pooling can be done manually, but with SSIS and SQL Server we like the idea of this being controlled for us, without us causing problems. In SSIS 2008 we were given exactly that automatic scheduling.

The question of the hour is, does this help us with performance? The massive tests that have already been done answer that question for us. If you haven't heard of the [ETL World Record][4], you need to get over there and check it out. 1 TB in 30 minutes!! Yes, on SSIS 2008 and the architecture behind it. The best way to check this out is to show the differences. Robert Sheldon also went over this test in the [SSIS 2008 Crib sheet][5]. Very cool write-up, and I recommend reading it.

### **The truth is in the pipe by logging it**

Let's create a package that will run some data through so we can log the execution.

> Note: the SSIS 2008 Crib sheet shows pretty much the same thing here. Robert did an awesome job explaining all of this. Highly recommend reading it.

Steps to test SSIS 2005 and 2008 so we can see the execution changes and speed differences:

1. In BIDS 2005, create a new package named Pipes 2005
2. Bring over an OLE DB Source and connect this source to AdventureWorks.
3. Bring in the table Sales.SalesOrderDetail
4. Drop a Conditional Split into the Data Flow tab
5. Make the condition based on OrderQty being greater than one.
6. Next, drag and drop two Derived Columns
7. Make one Derived Column the Case 1 output by connecting the first path to it and selecting the Case 1 output
8. Make the Expression (DT_STR,3,1252)ProductID + " for " + (DT_STR,3,1252)OrderQty
9. Change the Data Type to DT_STR (non-unicode string)
10. Do the same for the next Derived Column by connecting the remaining output path to it.
11. Name the column Under2orders and leave the Expression the same
12. Connect both Derived Columns to two unique SQL Server Destinations
13. In the SQL Server Destinations, use the same AdventureWorks connection. Click New to create a new table, using the following CREATE TABLE statement:

    ```sql
    CREATE TABLE [Playing_1] (
        OrderOver1Unit varchar(10)
    )
    ```

14. Use this for the second destination as well, but name the table [Playing_2]. Our finished product should appear like below.
15. Next, click the menu option SSIS and select Logging
16. Select the package in the Containers pane and add a new SSIS log provider for Text files.
17. Add a path for the new log, such as C:\ExecutionTree\SSIS_Test2005
18. Highlight the Data Flow Task to move into logging the task.
19. Click the Details tab on the right and click the Advanced button
20. Scroll down a bit and check PipelineExecutionTree.
21. Uncheck everything except MessageText
22. Click OK until out of the editors and save the package.
23. Run the package from BIDS

And we have our data split and loaded. Execution time was 936 milliseconds.

SSIS 2005 PipelineExecutionTrees log:

> begin execution tree 0 output "OLE DB Source Output" (11)

Read More...