In my previous post I outlined some of my own history with monitoring and my intent to review several available logging services. To compare apples to apples, the logging mechanisms and log messages will operate consistently across each of the selected services. The sample code is available on GitHub and will continue to be updated as I try additional services: tarwn/InstrumentationSampleCode
[Agent Mulder][1] is a plugin for [ReSharper][2], so you will need that. The Agent Mulder plugin analyzes the DI containers (Dependency Injection, sometimes called Inversion of Control, or IoC, containers) in your solution and provides navigation to, and finding usages of, types registered or resolved by those containers. You will obviously also need a DI container 😉 I will test it with Autofac and VB.Net (of course), using Agent Mulder 1.0.4.
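For context, here is a minimal sketch of the kind of Autofac registration and resolution Agent Mulder detects; the `IEmailSender`/`SmtpEmailSender` types are hypothetical placeholders, not from the original post:

```vb
Imports Autofac

Public Interface IEmailSender
End Interface

Public Class SmtpEmailSender
    Implements IEmailSender
End Class

Module Bootstrapper
    Sub Main()
        ' A registration like this is what Agent Mulder picks up,
        ' offering navigation from IEmailSender to its registration.
        Dim builder As New ContainerBuilder()
        builder.RegisterType(Of SmtpEmailSender)().As(Of IEmailSender)()

        Dim container As IContainer = builder.Build()

        ' Resolution is recognized too, so SmtpEmailSender no longer
        ' shows up as an unused type in find-usages.
        Dim sender As IEmailSender = container.Resolve(Of IEmailSender)()
    End Sub
End Module
```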
This is the start of a series on managing multiple SQL Server instances. Up to now I’ve mostly been writing about Idera’s Diagnostic Manager, which is great for this purpose, but like anything else, multiple tools are needed. Do you ever find yourself deploying the same code to the same 10 SQL instances one at a time and wish there was a better way? Do you keep track of which SQL instances you have by writing them on Post-it notes in your cube? Are your developers always asking you for the names of the SQL instances?
Monitoring [in IT] sucks and I am probably more critical of its state than most IT people. Over the next few posts I’m going to integrate a sample application with several logging and data services, evaluating them against my own needs and expectations. Besides the comparison, and perhaps more importantly, the examples will show how easy it is to instrument our applications and start getting visibility into what is actually happening behind the scenes.
I’m a huge fan of user groups. It’s why I helped found and am on the board of MADPASS, and why I’ll start another user group in northeast Wisconsin this fall. User groups give us training and learning opportunities, but with a face-to-face component no amount of online learning or book reading can match. They are all about networking, community, and learning. If you’re not attending a user group, you should be.
From SQL Server 2005 through 2012, the 8,060-byte row-size limit gained a new wrinkle when variable-length data types were used: varchar, nvarchar, varbinary, sql_variant, or CLR types. Essentially, this is handled by the addition of a large-object page type: row overflow pages, or pages in the ROW_OVERFLOW_DATA allocation unit. The row overflow page type allows a row to exceed the 8,060-byte limitation by doing exactly what the name implies: extending the row into an overflow page. To accomplish this, a 24-byte pointer is retained on the original page, which still resides in the IN_ROW_DATA allocation unit; the same holds when multiple row overflow pages are introduced. Row overflow pages can be another factor when indexing and reviewing existing execution plans. The end goal should be a good execution plan that brings in as few pages as possible to fulfill the needs of the transaction.
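A quick way to see row overflow in action (a minimal sketch; the table name and column sizes are illustrative):

```sql
-- Two wide varchar columns push a single row past the 8,060-byte in-row limit
CREATE TABLE dbo.OverflowDemo (
    Id   INT IDENTITY(1,1) PRIMARY KEY,
    Col1 VARCHAR(5000),
    Col2 VARCHAR(5000)
);

INSERT INTO dbo.OverflowDemo (Col1, Col2)
VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000)); -- ~10,000 bytes in one row

-- The table now has pages in both IN_ROW_DATA and ROW_OVERFLOW_DATA
SELECT au.type_desc, au.total_pages, au.used_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON au.container_id = p.hobt_id
WHERE p.object_id = OBJECT_ID('dbo.OverflowDemo');
```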
Today I was “gifted” a VirtualBox VDI file with a setup I need to use to give some custom training. I tried VirtualBox once before without much success, and this time too the machine failed to boot: it kept complaining about some video settings and the boot sequence was interrupted. Time to convert the VDI to a good old VMware VMDK, but how? The first hit on Google showed an easy way to do this on Linux; the second showed a complex one using third-party tools on Windows. So here’s the easy one on Windows:
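The original command isn’t shown in this excerpt, but the usual approach is VBoxManage, the command-line tool that ships with VirtualBox (the file names here are placeholders):

```
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd "training.vdi" "training.vmdk" --format VMDK
```

The converted VMDK can then be attached to a new VMware virtual machine.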
As part of a long series of posts, I implemented a version of the MVC Music Store tutorial application on top of a pair of SQL Server CE databases. SQL Server CE is great for small apps: a portable, file-based database that can easily be moved to a full SQL Server instance later. Last week I migrated my application to full SQL Server instances instead of the SDF files and picked up a 3x performance improvement. It was interesting enough that I decided to share 🙂
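The excerpt doesn’t show the migration itself; in a typical Entity Framework setup the switch is largely a connection-string change. A minimal sketch (the connection names, server, and database are illustrative assumptions, not taken from the post):

```xml
<connectionStrings>
  <!-- Before: SQL Server CE 4.0 pointing at the .sdf file -->
  <add name="MusicStoreEntities"
       connectionString="Data Source=|DataDirectory|\MvcMusicStore.sdf"
       providerName="System.Data.SqlServerCe.4.0" />

  <!-- After: a full SQL Server instance -->
  <add name="MusicStoreEntitiesSqlServer"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MvcMusicStore;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```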
With the introduction of PowerPivot for Excel and SharePoint in SQL Server 2008 R2, Microsoft gave us another analysis engine to use with our data. While not embedded in the SQL Server stack at the time, it was clear that in-memory technologies were the next generation of analysis at Microsoft. When it was initially released, the in-memory data solution was called Vertipaq. With the release of SQL Server 2012, the Vertipaq engine was rebranded as xVelocity, an umbrella name covering all of Microsoft’s in-memory technologies. In the case of SQL Server and PowerPivot, Vertipaq is called the “xVelocity in-memory analytics engine”, which I will refer to as xVelocity for the duration of this post.
Yesterday, June 19th, the Balanced Data Distributor (BDD) transform for SSIS 2012 was released (it is also available for 2008). BDD is a creation of the SQLCAT team, so we know one thing was in mind: performance! For me, this has been a long-awaited upgrade to the transform. The purpose of the BDD transform is to take full advantage of SSIS’s multithreaded architecture by splitting a single data flow path into multiple outputs that can be processed in parallel. I’ve always been a big supporter of taking full advantage of the resources that are available to push performance as far as possible; this was one reason the data flow changes in SSIS 2008 were so welcome. To develop truly high-end, enterprise-level ETL systems, we need to know what the platform exposes, from hardware to software, for running through data more efficiently, faster, and with stability.