Lorin Hochstein's Blog, page 29

August 22, 2015

Review: Software Developers’ Perceptions of Productivity

I wrote a paper review for It Will Never Work in Theory.


Published on August 22, 2015 16:35

June 28, 2015

Over-engineering as a safety technique


The limitation of over-engineering as a safety technique is that the extra weight and volume may begin to contribute to the very problem that it was intended to solve. No-one knows how much of the volume of code of a large system is due to over-engineering, or how much this costs in terms of reliability. In general safety engineering, it is not unknown for catastrophes to be caused by the very measures that are introduced to avoid them.



How Did Software Get So Reliable Without Proof?, C.A.R. Hoare


Published on June 28, 2015 10:57

April 30, 2015

Ansible: Up and Running is out!

My book on Ansible is now done! You can get the ebook today; the print edition should be out in a few weeks.



On Amazon
On O’Reilly shop
Official book website

Published on April 30, 2015 18:50

February 15, 2015

New paper review

I reviewed a paper at It Will Never Work in Theory.


Published on February 15, 2015 07:21

November 9, 2014

Ansible: Up and Running

I’m writing a book about Ansible. You can grab a free preview version of the book that contains the first three chapters.


[Image: Ansible: Up and Running ebook preview]


Published on November 09, 2014 14:50

September 30, 2014

Cloud software, fragility and Air France 447

The Human Factor by William Langewiesche is an account of how Air France Flight 447 crashed back in 2009. It’s a great piece of long-form journalism.


[Recently deceased engineer Earl] Wiener pointed out that the effect of automation is to reduce the cockpit workload when the workload is low and to increase it when the workload is high.


That’s basically Nassim Nicholas Taleb’s definition of fragility: the potential downside is much greater than the potential upside.


But this is what really struck me:


[U. Michigan Industrial & Systems Engineering Professor Nadine] Sarter said, “… Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system. I was once in a room with five engineers who had been involved in building a particular airplane, and I started asking, ‘Well, how does this or that work?’ And they could not agree on the answers. So I was thinking, If these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation … well, good luck.”


We also build software by connecting together different subcomponents. We know that the hardware underlying these subcomponents can fail, and in order to provide reliability for our customers we must be able to deal with these failures. If we’re sophisticated, we turn to automation: we try to build fault-tolerant systems that can automatically detect failures and compensate for them.


But the behavior of these types of systems is difficult to reason about, precisely because they take action automatically based on self-monitoring. These systems can handle hard failures of individual components: when something has obviously failed. But if it’s a soft failure, where the component is still partially working, or if two independent components fail simultaneously, then the system may not be able to handle it. Adding automation to handle failures introduces new failure modes even as it eliminates old ones, and the new ones may be much harder for an operator to understand.
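As a rough sketch of the distinction (hypothetical code, not any particular system’s health checker): a monitor that only treats “no response at all” as failure will happily leave a degraded server in rotation.

```python
import time
import urllib.request

# Hypothetical thresholds for illustrating hard vs. soft failures.
HARD_TIMEOUT_S = 5.0   # no response at all within this -> hard failure
SOFT_LATENCY_S = 0.5   # responds, but slower than this -> soft failure

def check_server(url):
    """Classify one server as 'healthy', 'degraded', or 'failed'."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=HARD_TIMEOUT_S) as resp:
            resp.read(1)           # make sure the server actually sends bytes
    except OSError:
        return "failed"            # hard failure: easy for automation to see
    elapsed = time.monotonic() - start
    if elapsed > SOFT_LATENCY_S:
        return "degraded"          # soft failure: still "up", so a naive
                                   # monitor keeps sending it traffic
    return "healthy"
```

A monitor that only removes “failed” servers from rotation never notices the degraded ones, and the load they can’t handle spills over onto the rest of the fleet.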


Consider the AWS outage of October 2012, which took down multiple websites that deploy on Amazon’s cloud. The outage was a result of:


1. A hardware failure on a data collection server

2. A DNS update not propagating to all internal DNS servers

3. A memory leak bug in an agent that monitors the health of EBS (storage) servers


The agents collect operational data from the storage servers and transfer it to data collection servers. Some agents couldn’t reach the collection servers because of a stale DNS entry. The agents also had a memory leak bug that was triggered by these failed connection attempts, so they gradually consumed too much memory on the storage servers.


Lack of sufficient memory is a good way to create a soft failure: the component still works, but in a degraded fashion. These memory-poor servers couldn’t keep up with requests, and eventually became stuck. The system detected the stuck servers (a hard failure), and failed over to healthy servers, but there were so many stuck servers that the healthy servers couldn’t keep up, and also became stuck.
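Here’s a toy model of that cascade (my own made-up numbers, not Amazon’s): below some threshold of stuck servers the survivors can absorb the failed-over load, but above it every failover pushes another survivor past its capacity.

```python
# Toy failover-cascade model. N servers share a fixed total load; when a
# server gets stuck, its load is redistributed to the remaining healthy
# servers, and any healthy server pushed past its capacity also gets stuck.
TOTAL_LOAD = 100.0
NUM_SERVERS = 100
CAPACITY = 1.5          # each server can handle 1.5x its fair share

def surviving_servers(initially_stuck):
    """Return how many servers are still healthy once the cascade stops."""
    fair_share = TOTAL_LOAD / NUM_SERVERS
    healthy = NUM_SERVERS - initially_stuck
    while healthy > 0:
        load_per_server = TOTAL_LOAD / healthy
        if load_per_server <= fair_share * CAPACITY:
            return healthy          # survivors can cope; system stabilizes
        healthy -= 1                # another overloaded server gets stuck
    return 0                        # failover cascaded to total collapse

print(surviving_servers(initially_stuck=10))   # -> 90: survivors cope
print(surviving_servers(initially_stuck=40))   # -> 0: cascade to collapse
```

In this model there’s no graceful middle ground: either the healthy servers can absorb the extra load right away, or the failover mechanism itself drives the whole fleet into the stuck state.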


The irony is that this failure was entirely due to the monitoring subsystem, which is intended to increase system reliability! It happened because of the coincidence of a hardware failure in one component of the monitoring subsystem (a data collection server) and a software bug in another component of the monitoring subsystem (a data collection agent).


We can’t avoid automation. In the case of the airlines, the automation has significantly increased safety, even though it increases the chances of incidents like Air France 447. For cloud software, this type of fault-tolerance automation does result in more reliable systems.


But, while these systems mean failure is less likely, when it does happen, it’s much more difficult to understand what’s happening. The future of cloud software is systems that fail much less often, but much harder. Buckle up.


Published on September 30, 2014 18:26

September 20, 2014

Estimating confidence intervals, part 6

Our series of effort-estimation-in-the-small continues. This feature took a while to complete (where “complete” means “deployed to production”). I thought this was the longest feature I’d worked on, but looking at the historical data, there was another feature that took even longer.


[Plot: estimated vs. actual days remaining, with 90% confidence interval]


Somehow, the legend has disappeared from the plot. The solid line is my best estimate of time remaining each day, and the dashed line is the true amount of time left. The grey area is my 90% confidence interval estimate.


[Plot: error in effort estimate]


As usual, my “expected” estimate was much too optimistic. I initially estimated 10 days, whereas it actually took 17 days. I did stay within my 90% confidence interval, which gives me hope that I’m getting better at those intervals.


When I started this endeavor, my goal was to do a from-scratch estimate each day, but that proved to require too much mental effort, and I succumbed to the anchoring effect. Typically, I would just make an adjustment to the previous day’s estimate.


Interestingly, when I was asked in meetings how much time was left to complete this feature, I gave off-the-cuff (and, unsurprisingly, optimistic) answers instead of consulting my recorded estimates and giving the 90% interval.
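If you want to try this exercise yourself, here’s a minimal sketch (with made-up numbers, not my actual data) of how you might record each day’s interval estimate and check how well calibrated the intervals are.

```python
# Each record is one day's estimate of the days remaining on a feature:
# (low, expected, high) is the 90% interval, actual is the number of days
# that actually remained. The numbers below are made up for illustration.
estimates = [
    # (low, expected, high, actual_remaining)
    (5, 10, 20, 17),
    (4,  9, 18, 16),
    (4,  8, 16, 15),
]

def interval_hit_rate(records):
    """Fraction of estimates whose interval contained the actual value."""
    hits = sum(1 for low, _, high, actual in records if low <= actual <= high)
    return hits / len(records)

def optimism(records):
    """Average ratio of actual to expected (> 1 means you were optimistic)."""
    return sum(actual / expected for _, expected, _, actual in records) / len(records)

print(f"interval hit rate: {interval_hit_rate(estimates):.0%}")
print(f"actual / expected: {optimism(estimates):.2f}x")
```

Over enough features, a well-calibrated estimator’s 90% intervals should contain the actual value about 90% of the time.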


Published on September 20, 2014 20:55

August 24, 2014

Good to great

A few months ago I read Good to Great, a book about the factors that led to companies making a transition from being “good” to being “great”. Collins, the author, defines “great” as companies whose stock performed at least three times better than the overall market over at least fifteen years. While the book is ostensibly about a research study, it feels packaged as a set of recommendations for executives looking to turn their good companies into great ones.


The lessons in the book sound reasonable, but here’s the thing: if Collins’s theory is correct, we should be able to identify companies that will outperform the market by a factor of three in fifteen years’ time, by surveying employees to see whether their companies meet the seven criteria outlined in the book.


It has now been almost thirteen years since the book was published. Where are the “Good to Great” funds?


Published on August 24, 2014 19:26

Estimating confidence intervals, part 5

This was a quick feature. I accidentally finished it on day 4 before making that day’s estimate, so I just set the max/min/expected values to 1.


 


[Plot: effort estimate]


 


[Plot: error in effort estimate]


Published on August 24, 2014 17:28