The outsourcing question

I run a web development business, and I'm perpetually engaged with the question of how many of my supporting services should be contracted out versus done myself. For what I don't do myself: whom can I trust to deliver that service reliably to my clients? And what do I do when that service fails?

This is not an academic debate this week for me.

On Sunday, my server-hardware supplier failed me miserably. On Friday, I had notified them of errors showing up in my log related to one of my disks (the one that held the database data and backup files). They diagnosed it as a controller issue and scheduled a replacement for early Sunday morning. So far so good. The work took longer than they had expected, but the server came back and seemed to check out on first report, so I thought we were done. It was Sunday morning and I wasn't going to dig too deep into what I thought was a responsible service provider's area of responsibility.

On Sunday evening, Karin (my business associate at Blackfly) called me at home, which she normally never does, to alert me that the server was failing. By that point the disk was unreadable, so we scheduled a disk replacement and I resigned myself to restoring from my offsite backups, which were now a day older than normal because the hardware replacement had run past the hour when the backup job normally runs. (Why didn't I run it manually after the hardware "upgrade"? Good question.)
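The lesson I took from this is that backup freshness is worth checking independently of the backup schedule, especially around maintenance windows. Here's a minimal sketch of the kind of check I mean; the backup directory path is a hypothetical placeholder, not my actual setup:

```python
import time
from pathlib import Path


def newest_backup_age_hours(backup_dir: str) -> float:
    """Return the age, in hours, of the most recently modified file in backup_dir."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        raise FileNotFoundError(f"no backup files found in {backup_dir}")
    newest_mtime = max(p.stat().st_mtime for p in files)
    return (time.time() - newest_mtime) / 3600.0


# Usage (e.g. from a cron job, with a hypothetical path):
#   if newest_backup_age_hours("/var/backups/offsite") > 24:
#       ...send an alert...
```

A check like this, run after any maintenance, would have caught the stale backup before the disk died rather than after.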

That server has been much too successful of late: loading all the data from my offsite server was much slower than I'd anticipated (about two hours), and then running all the database restores took a while longer. To make things worse, I decided it was a good opportunity to update my MariaDB (MySQL) version from 5.2 to 5.5. That added unexpected stress and complications (beware the character-set configuration changes!), which I can mostly only blame myself for, but at least I suffered for it correspondingly in lost sleep.
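For anyone attempting a similar jump: one known trap is that the old server-side option `default-character-set` was removed as a `[mysqld]` option in the 5.5 series, so a my.cnf carried over from 5.2 can fail or silently change behaviour. A minimal sketch of pinning the settings explicitly, assuming utf8 is what your databases expect:

```ini
# my.cnf excerpt: pin character-set behaviour explicitly after the upgrade.
# The old [mysqld] option `default-character-set` was removed in 5.5;
# use character-set-server / collation-server instead.
[mysqld]
character-set-server = utf8
collation-server     = utf8_general_ci

[client]
default-character-set = utf8
```

Worth verifying afterwards with `SHOW VARIABLES LIKE 'character_set%';` that the running server matches what your existing data was stored as.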

But then on Monday, after sweeping up a bit, I discovered that the hardware swap done on Sunday morning had not only failed to address the problem (and made it much harder, by postponing what could have been a simple backup to the other disk); they had actually swapped good hardware for older hardware of lesser capacity. In other words, the response to the problem had been to make it considerably worse. I had a few words with them; I'll give them an opportunity to come up with something before I shame them publicly.

Now it's Tuesday morning, and the one other major piece of infrastructure that I outsource (DNS/registration) is down, and has been for the last hour.
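This is exactly the kind of outage you want to detect yourself rather than hear about from a client. A trivial sketch of an independent resolution check (the hostname would be whichever domain your outsourced DNS serves; this is illustrative, not my actual monitoring):

```python
import socket


def dns_is_resolving(hostname: str) -> bool:
    """Return True if the hostname currently resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:
        return False


# Usage (e.g. from a cron job on a machine outside your own network):
#   if not dns_is_resolving("example.com"):
#       ...send an alert...
```

The caveat is that a check like this only tells you what the resolver the script uses can see, so it's most useful run from somewhere outside your own infrastructure.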

In cases like this, my instinct is to circle the wagons, start hosting out of my basement (just kidding!), and run my own DNS service (also kidding, though less so). On the other hand, one advantage of not being responsible is that it gives me time to write on my blog while things are messed up.

Conclusion: there are no easy answers to the outsourcing question. By nature, I take my responsibilities a little too much to heart, and have a corresponding outlook on what healthy 'growth' looks like. Finding a reliable partner is tough. It's what I try to be.

Update: here's an exchange with my server host after they asked when they could schedule time to put the right CPU back in, and whether I wanted to keep the same ticket or open a different one:


Thanks for this. I don't care whether it's this ticket or another one. Having a senior technician to help sounds good, and I wonder if you could also tell me what you plan to do: are you going to put back my original chassis + CPU, or try to swap my old CPUs into this chassis? Or are you just going to see what's available at the time?

The cavalier swapping of mislabelled parts after a misdiagnosis of the original problem points to more than a one-off glitch, particularly in light of previous errors I've had with this server. It sounds to me like you've got a bigger problem, and having a few extra hands around doesn't convince me that you've addressed it.

What I have experienced is that you are claiming and charging for a premium service and delivering it like a bargain basement shop.


We will check available options prior to starting work during the maintenance window.
We are currently thinking we would like to avoid the old chassis in case there are any SCSI issues, and move the disks to another, tested chassis. As an option, we could add a CPU to the current server.
If you have any preference on these options, we will make it the priority.
I apologize again for the mistakes made, and the resulting downtime you have experienced.

Is it just me, or did they just confirm what I was afraid of?
