And it is time again for another “stupid me”! For those unfamiliar with the concept:
Every time I do something “stupid”, which happens from time to time, I’ll do a little blog post on what happened and how I solved it. The reason for this is twofold: I’ll have a solution online I can consult if it happens again, and other people can benefit from my mistakes as well. Because remember the ancient Chinese proverb: “It’s only stupid if you don’t turn it into a learning experience”. Okay, I might have made that last one up…
The problem
Some time ago, I had developed an SSIS package which was part of a much larger project. The package loaded some reference data needed by the data warehouse into a table. Nothing fancy, just a straight load. I had deployed the package to the development server, tested it there and, after everything seemed OK, I deployed the package to the user acceptance environment (UAT).
I needed to run this package so an initial load of the table could be done. I didn’t need to run the entire ETL, just the package, so I connected to the SSIS service and executed the package. I checked the table and … it was empty. Strange. I checked the log files and saw the package had indeed transferred the correct number of rows. Double strange. As I implement best practices in my SSIS packages, I have indirect package configuration set up. Basically this means there is an environment variable on each machine telling my packages where to find the server and database holding the configuration tables. The package uses these configuration tables to configure variables, connection strings et cetera inside the package. So I checked the environment variable and the configurations on the UAT server. Everything was fine. Triple strange. Then I checked the table on the development server. Suddenly, the number of rows had doubled! That meant I had been loading the development server instead of the UAT server. (Quadruple strange?) So what happened?
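For those unfamiliar with indirect configurations: the environment variable holds a connection string pointing to the database that contains the configuration table, and the package reads its settings from there at startup. A minimal sketch of that table, assuming the defaults generated by the Package Configuration Wizard (the filter value is made up):

-- Default table created by the SSIS Package Configuration Wizard
-- ("SQL Server" configuration type); table and column names are the wizard defaults.
SELECT ConfigurationFilter,  -- groups the settings belonging to one configuration
       PackagePath,          -- the package property the value gets applied to
       ConfiguredValue,      -- e.g. a connection string or variable value
       ConfiguredValueType
FROM   dbo.[SSIS Configurations]
WHERE  ConfigurationFilter = N'ReferenceDataLoad'; -- hypothetical filter name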
The solution
The answer was right under my nose: I had logged into the SSIS server of the UAT environment and executed the package manually by right-clicking it and selecting “Run Package”. What I forgot, however, is that this doesn’t mean the package is executed on the UAT server; it is actually executed on my local machine! An SSIS package is not executed where it is stored, but on the machine that calls DTEXEC. In other words, if server A stores the SSIS packages but a SQL Server Agent job on server B executes them, the packages run on server B, not on server A. Take this into account when you want a dedicated SSIS server: you’ll need to install the SQL Server engine as well, just to have SQL Server Agent.
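To make the gotcha concrete, this is roughly what “Run Package” did behind the scenes (server and package names are made up):

rem Runs on the machine where it is launched, i.e. my own laptop.
rem /SERVER only says where to FETCH the package from (the MSDB on UATSRV);
rem the data flow itself executes locally, using MY environment variable.
dtexec /SQL "\RefData\LoadReferenceData" /SERVER "UATSRV"

Since the package ran on my machine, it picked up my local environment variable, which pointed to the development configuration tables, and happily loaded the development server.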
The embarrassing part is I tell people about this a lot in the forums, but when it happened to me I completely forgot about it. I blame this on the lack of caffeine and sleep 🙂
Related articles:
Stupid me #1 – Locking myself out of SQL Server
Edit: I forgot to mention it in the article, but it was my colleague Valentino who had a moment of clarity and pointed out my foolishness 🙂
4 Comments
Thanks for the mention! 🙂
There’s actually a better “solution” than logging on to the remote server. Just create a job on that server to run the package. Then you can use SSMS on your client PC to start the job and thus you’re running the package on the remote server.
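For reference, Valentino’s suggestion boils down to something like this. A minimal T-SQL sketch, assuming the package is stored in MSDB; all names are made up:

-- Run once against the UAT instance to create the job.
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Load reference data (manual)';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Load reference data (manual)',
    @step_name = N'Execute package',
    @subsystem = N'SSIS',
    @command   = N'/SQL "\RefData\LoadReferenceData" /SERVER "UATSRV"';
EXEC dbo.sp_add_jobserver @job_name = N'Load reference data (manual)'; -- target the local (UAT) server
GO
-- From SSMS on your own PC: this starts the job, and thus the package, on UATSRV.
EXEC dbo.sp_start_job @job_name = N'Load reference data (manual)';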
Creating a job on a UAT server just for executing a package once does seem like a lot of work 🙂 (And it clutters the server.)
Clutter could indeed become an issue. Unless you call it “Koen’s Test Job” and rename your package to something like “KoenTestJob.dtsx” while deploying 🙂
This makes me think of something I wish they had implemented: the ability to group jobs together (just for UI purposes). That would really help to avoid clutter in the Job Activity Monitor, and we’d be able to put jobs of the same project in their own folder…
Such functionality would be nice indeed. No one is stopping you from logging it on Connect 🙂