Not too long ago I got a work order requesting help with removing duplicate records from a report. The report, and the database it was built on, was built solely by a co-worker who left two years ago. The report displayed a list of project numbers and some data related to them. I opened up the project table and, voila, I found some duplicate projects. The table had an identity field as its primary key, but the project number had no unique index on it, leaving the table open to someone inserting the same project multiple times, either on purpose or by accident.
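A cleanup like this can be done in one pass with a windowing function, then locked down by adding the missing index. A minimal sketch, assuming a hypothetical dbo.Project table with an identity key ProjectID and the duplicated ProjectNumber column:

```sql
-- Delete every copy of a project number except the earliest row,
-- using an updatable CTE over ROW_NUMBER().
;WITH Dupes AS
(
    SELECT ProjectID,
           ROW_NUMBER() OVER (PARTITION BY ProjectNumber
                              ORDER BY ProjectID) AS RowNum
    FROM dbo.Project
)
DELETE FROM Dupes
WHERE RowNum > 1;

-- Add the unique index that was missing, so the duplicates can't return.
CREATE UNIQUE NONCLUSTERED INDEX IX_Project_ProjectNumber
    ON dbo.Project (ProjectNumber);
```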
It’s been nearly two years since I joined the Magenic team, and it’s been a great ride so far. Well, the ride is about to get even better. Next week, Magenic is hosting a new event titled the BI Summit. The event focuses on the decision makers and team members who shape how a company approaches Business Intelligence. Magenic has long been a monster of a custom development consulting company with monster names at its disposal. The side Magenic is now showing is the data services and business intelligence aspect of its offerings. At the BI Summit, Magenic will show off both the skills it can offer and the newest technological advancements Microsoft has made in BI and SQL Server.
Several weeks ago I posted a three-part series on using available online logging services for instrumenting applications. None of the services overwhelmed me, but there was an additional service I found that caters specifically to numeric metrics. Given that I’ve had a tab open on the Librato site for the past several weeks, I thought it was time to actually do the review.
In a merge replication world, laptops typically make up the majority of the subscribing machines. With laptops, or any other user-controlled computer, changes come up: upgrades are needed, or new hardware is rolled out. All of these create the need for a new or rebuilt laptop or desktop to be issued to users. Many times when a rebuild or replacement happens and the machine was subscribing to a merge replication publication, the step taken is to apply a new snapshot, with the DBA calling for the subscriber to be reinitialized, as sketched below. Although this method works, it is a massively time-consuming process and isn’t needed.
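For reference, this is the conventional reinitialize call described above, the one the post argues is usually unnecessary; the publication and subscriber names here are placeholders:

```sql
-- Run at the publisher, in the publication database.
-- sp_reinitmergesubscription marks the subscription for reinitialization,
-- so the next merge agent run applies a fresh snapshot.
EXEC sp_reinitmergesubscription
    @publication   = N'MergePub',      -- placeholder publication name
    @subscriber    = N'LAPTOP42',      -- placeholder subscriber server
    @subscriber_db = N'FieldSales',    -- placeholder subscriber database
    @upload_first  = N'true';          -- push pending subscriber changes up first
```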
I often run across databases that store files, images, and all sorts of large object data types, better known as LOBs. These databases typically become problematic, because storing these objects in a relational database carries performance problems: excessive server resource consumption and limited indexing abilities that make querying the data as painful as inserting it. Although most administrators see storing these objects in SQL Server as a design negative, it does happen, and there are valid reasons for it, such as security and retention of the objects. For example, in some medical systems, files can contain patient-related information and fall under strict guidelines and security restrictions. Most administrators who have worked in a hospital, or in a pharmaceutical system that provides Rx products, have run into these strict guidelines and security measures. Placing these files in a SQL Server database allows for a deeper security model, an easily obtained recovery model, and a longer retention period. Now, the cost of disk for a database server is typically higher than a file-based system’s disk cost, and that alone can tip the decision toward file-based storage combined with third-party tools that meet the same security and recovery needs. Weigh those options heavily at the design stage when files are part of the overall requirements.
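When the design does call for files in the database, the storage itself is straightforward. A minimal sketch, with illustrative table and file names, using varbinary(max) and OPENROWSET ... SINGLE_BLOB:

```sql
-- Illustrative table for in-database file storage.
CREATE TABLE dbo.PatientDocument
(
    DocumentID   int IDENTITY(1,1) PRIMARY KEY,
    FileName     nvarchar(260)  NOT NULL,
    UploadedOn   datetime       NOT NULL DEFAULT GETDATE(),
    FileContents varbinary(max) NOT NULL   -- the LOB column
);

-- Load a file from a path the SQL Server service account can read.
INSERT INTO dbo.PatientDocument (FileName, FileContents)
SELECT N'labresult.pdf', doc.BulkColumn
FROM OPENROWSET(BULK N'C:\Files\labresult.pdf', SINGLE_BLOB) AS doc;
```

This is the pattern whose insert and query costs are discussed above; on SQL Server 2008 and later, FILESTREAM is the usual alternative when the stored files are large.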
A few weeks ago I looked at a project by Luke McGregor (blog|twitter) that benchmarks a variety of ORMs doing common operations at scales of 1 to 10,000 records. I was curious to see how the ORMs he had included would fare against common ADO.NET methods, and how those methods would compare to one another.

My original post: Evaluating ORMs for Batch Data Performance
Microsoft just issued a press release announcing that Windows 8 has been released to manufacturing (RTM). Here are the important dates for us developers:

- August 15th: Developers will be able to download the final version of Windows 8 via their MSDN subscriptions.
- August 15th: IT professionals testing Windows 8 in organizations will be able to access the final version of Windows 8 through their TechNet subscriptions.
- August 16th: Customers with existing Microsoft Software Assurance for Windows will be able to download Windows 8 Enterprise edition through the Volume License Service Center (VLSC), allowing them to test, pilot, and begin adopting Windows 8 Enterprise within their organizations.
My latest SQL Server read was Troubleshooting SQL Server – A Guide for the Accidental DBA, by Jonathan Kehayias (blog | twitter) and Ted Krueger (blog | twitter). The book is designed to teach people who aren’t familiar with the intricacies of SQL Server the day-to-day troubleshooting steps and techniques. Topics covered include disk I/O configuration, CPU, memory, indexes, blocking, deadlocks, transaction logs, and – my favorite – accidents waiting to happen.
So yesterday (on a Sunday) I asked you to look at this code and guess what it did.

```vbnet
Module Module1
    Sub Main()
        Dim c1 As New class1
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test1(c1)
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test2(c1)
        Console.WriteLine(c1.Name)
        Console.ReadLine()
    End Sub

    Public Sub test1(ByRef c1 As class1)
        c1 = New class1
    End Sub

    Public Sub test2(ByVal c1 As class1)
        c1 = New class1
    End Sub
End Module

Public Class class1
    Public Property Name As String
End Class
```

In essence, you should now know the difference between ByVal and ByRef for reference types. The result would be this:

> test1

For ByRef the result is easy to understand, since you are passing the reference of that object to the method, and the method then just goes on working with your object like the calling method did. The argument passed as ByVal is a copy of that reference, so when you create a new object inside the method, only the copy is changed to point at it; the caller’s variable still points at the original object. Now try this piece of code.

```vbnet
Module Module1
    Sub Main()
        Dim c1 As New class1
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test1(c1)
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test2(c1)
        Console.WriteLine(c1.Name)
        Console.ReadLine()
    End Sub

    Public Sub test1(ByRef c1 As class1)
        c1 = New class1
        c1.Name = "test2"
        Console.WriteLine(c1.Name)
    End Sub

    Public Sub test2(ByVal c1 As class1)
        c1 = New class1
        c1.Name = "test3"
        Console.WriteLine(c1.Name)
    End Sub
End Module

Public Class class1
    Public Property Name As String
End Class
```

With this as the result:

> test2
> test2
> test3
> test1

It shows that the c1 in the method test2 has become a different variable, no longer associated with the parameter we passed in, since they no longer share the same reference. However:

```vbnet
Module Module1
    Sub Main()
        Dim c1 As New class1
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test1(c1)
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test2(c1)
        Console.WriteLine(c1.Name)
        Console.ReadLine()
    End Sub

    Public Sub test1(ByRef c1 As class1)
        c1.Name = "test2"
    End Sub

    Public Sub test2(ByVal c1 As class1)
        c1.Name = "test3"
    End Sub
End Module

Public Class class1
    Public Property Name As String
End Class
```

will give the following result:

> test2
> test3

The c1 passed to test2 as ByVal still behaves the same here, because the underlying reference is still the same. So ByVal creates a copy of the reference, but until the point you actually change that reference, it will still point to the same object. This is all perfectly logical, of course, but I’m sure it has caused more than a few bugs for people.
```vbnet
Module Module1
    Sub Main()
        Dim c1 As New class1
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test1(c1)
        Console.WriteLine(c1.Name)
        c1.Name = "test1"
        test2(c1)
        Console.WriteLine(c1.Name)
        Console.ReadLine()
    End Sub

    Public Sub test1(ByRef c1 As class1)
        c1 = New class1
    End Sub

    Public Sub test2(ByVal c1 As class1)
        c1 = New class1
    End Sub
End Module

Public Class class1
    Public Property Name As String
End Class
```

Before running the above, try guessing the result. BTW, it’s not a bug, it’s normal behavior.