One of the projects I have been working on is a system for managing content on our network of websites. One of our requirements is that changes don’t take effect immediately, but on a separate preview network where our customer can verify that her changes appear the way she expected before pushing them into the production environment. Because of this requirement, we need to maintain two separate sets of product images (hosted on Amazon S3, with their CloudFront CDN used for the production sites).
This is an archive of the posts published to LessThanDot from 2008 to 2018, over a decade of useful content. While we're no longer adding new content, we still receive a lot of visitors and wanted to make sure the content didn't disappear forever.
Axel published a post titled Catching the OUTPUT of your DML statements earlier today. I posted something about the OUTPUT clause myself as part of my SQL Advent 2011 series here: SQL Advent 2011 Day 11: DML statements with the OUTPUT clause. Reading Axel’s post, where he describes how he can use this instead of a trigger, got me thinking about how @@identity and scope_identity() would behave. You all know that when you have an insert trigger that itself inserts into a table with an identity column, @@identity will give you the identity value from the table the trigger inserted into, while scope_identity() will give you the identity value from the table the trigger fired from.
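The difference is easiest to see side by side. A minimal sketch (the table and trigger names are my own, not taken from either post):

```sql
-- Two tables with identity columns; the trigger on Orders inserts into AuditLog.
CREATE TABLE Orders   (OrderId INT IDENTITY(1,1),   Item VARCHAR(50));
CREATE TABLE AuditLog (LogId   INT IDENTITY(100,1), Note VARCHAR(50));
GO
CREATE TRIGGER trOrdersInsert ON Orders AFTER INSERT AS
    INSERT INTO AuditLog (Note) VALUES ('order inserted');
GO
INSERT INTO Orders (Item) VALUES ('widget');

SELECT @@IDENTITY       AS LastIdentityAnySc,  -- 100: AuditLog's value, set inside the trigger
       SCOPE_IDENTITY() AS LastIdentityScope;  -- 1: Orders' value, from our own insert
```

So if the trigger inserts into another identity table, @@IDENTITY silently reports that table’s value, while SCOPE_IDENTITY() still returns the one from your own statement.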
Suppose you need to log the old and new version of the data you change in a table in your database. If you ask a DBA how this could be done, I guess 80% will tell you to do it with an after trigger (the number is going down because every new edition of SQL Server comes with new features to do this). If you ask a DBA what he thinks of triggers, 95% will tell you to avoid them as much as possible… So what should you do? Well, it all depends on your requirements and how your data is saved. If you need to be sure that all changes are logged, including direct changes that don’t come through a business application, you’d better be looking at triggers and/or audits. If you just want to log from within your application, you can consume the OUTPUT directly from your INSERT, UPDATE, DELETE or MERGE statement. Let’s see how it works. First of all, gather some data to work with; I’ll use some data from the AdventureWorks database:
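As a taste of the technique, here is a minimal sketch of consuming OUTPUT from an UPDATE; the AdventureWorks table is real, but the price change itself is just an illustration:

```sql
-- Capture old and new values straight from the UPDATE, no trigger needed.
DECLARE @ChangeLog TABLE (OldPrice MONEY, NewPrice MONEY, ModifiedAt DATETIME);

UPDATE Production.Product
SET    ListPrice = ListPrice * 1.10
OUTPUT deleted.ListPrice,    -- the value before the update
       inserted.ListPrice,   -- the value after the update
       GETDATE()
INTO   @ChangeLog (OldPrice, NewPrice, ModifiedAt)
WHERE  ProductLine = 'T';

SELECT * FROM @ChangeLog;
```

The deleted and inserted pseudo-tables work just like they do in a trigger, but the rows come back in the scope of your own statement, ready to insert into a log table.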
I decided to give Duplicati a try to back up some of my stuff to the cloud. Duplicati is free/libre/open-source software (FLOSS). Duplicati works with Amazon S3, Windows Live SkyDrive, Google Drive (Google Docs) and Rackspace Cloud Files, as well as WebDAV, SSH and FTP. Duplicati uses pre-Internet encryption, so you know that nobody else can decrypt your files. When working with the cloud you have to be in TNO (Trust No One) mode; you never know what malicious person could be on the other side. Duplicati has built-in AES-256 encryption, and backups can be signed using GNU Privacy Guard.
[RavenHQ][1] is a cloud-hosted solution for [RavenDB][2], and it has a free option to try it out. So I did. First you create an account and then you add a database. To create a database you just fill in the database name and click on the red add button for the bronze plan. When you now go to databases you will see this. Before we go on with our coding we need to know our connection details. These can be found by clicking the manage button for your database and then connection strings.
A great use of Idera’s Diagnostic Manager is managing your SQL jobs: knowing which jobs have failed, which ones are running long, and which ones have been running for the last 10 hours. You even get to adjust the thresholds for each of those metrics. If a job fails, you don’t have to check the history of the job in SSMS to see why it failed. Instead you can do it right from Diagnostic Manager. From SQL Agent Jobs in the Services section, find and click on the job you want to know the history of in the top section.
In another life, I’m a professional bookworm who gets to read and review books full-time. I'd like to live here, please. My latest SQL read was SQL Server MVP Deep Dives. It’s a compilation – 59 chapters – on various SQL Server topics, written by Microsoft MVPs. What’s so cool about this book, as opposed to most, is that it plays to an author’s strengths and passions. It doesn’t cover one topic; it covers every aspect of SQL Server.
There’s been a lot of buzz about the cloud over the past few years, with much of that attention going to IaaS and SaaS platforms, but there’s a revolution (or re-revolution) that is even more important, and that’s PaaS. What PaaS brings us is the ability to scale horizontally and treat CPU, memory, and storage as pools of resources that are as deep as our checkbooks allow. Forget about virtual servers. Remember that 60-hour job with a 24-hour deadline? Build it on a PaaS platform, spend a couple hundred dollars, and you won’t even be staying late today.
In response to my previous post about making sure you sign up for the correct training, I got the question of where you can find training for more experienced DBAs, for a reasonable price and not involving self-study. As I’m living in Belgium, I can only answer this question for my region, but I think you can use the same approach for other countries and regions. First of all, it’s true that the basic implementing and maintaining courses are so popular that they are regularly scheduled. When you browse the Microsoft Training Catalog you’ll find some other available trainings; it’s possible to call a training center and ask to schedule one of those, but keep in mind they need at least 2-3 students before they will start the training.
There was a question today: “How to change my local SQL Server sa password?” I would like to expand on my answer in this post. Before I start, I would like you to read this post by Ted Krueger first: To SA or not to SA, to understand why you should not be using the sa account. Now that you know why you should not be using the sa account and you are still using it anyway, let’s see how you can change the password for the sa account.
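If you prefer a script over the SSMS dialogs, the change is one statement; a minimal sketch, run as a member of the sysadmin role (the password shown is obviously a placeholder):

```sql
-- Change the sa password; pick something long and unique, not this example.
ALTER LOGIN sa WITH PASSWORD = 'Str0ng&Unique!Passphrase';

-- Or, in the spirit of not using sa at all, disable the login entirely.
ALTER LOGIN sa DISABLE;
```

ALTER LOGIN works whether the instance runs in mixed mode or not, but the sa login is only usable when SQL Server authentication is enabled.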