I like money. There, I just wanted to get that off my chest.

Well, there’s actually a little more to it than that. My approach to handling money, or more specifically the money set aside for retirement, has recently changed pretty radically. Formerly I treated my retirement plan like, say, cleavage. Yeah, sure, let’s go with that. That is to say, it’s okay to steal a glance every now and again, but it’s not something you should fixate on, despite urges to the contrary.

That was basically my plan. Put some amount in a retirement account, let it do its thing, and when you retire hopefully something good happened. Like a time capsule you open up years later and hope it’s not filled with Pogs. I’d also heard that you should try to save 15% of your income which, had I actually taken that to heart as I began my professional career, might have made a difference. Further, I’ve never been what you’d call frugal, nor was I a spendthrift. I just didn’t give a lot of thought to a financial plan apart from throwing a little money over my shoulder into a retirement account and trying to pay down debt whenever possible. I like simple and stress-free, but I now realize I was just postponing the eventual complexity and stress, when there’s probably a happy medium if it’s addressed with enough time.

So, we recently reached a significant financial milestone and it started me thinking: you know, retirement is really not that far away. I’ll be 62 in 14 years. My youngest child is 16. His birth really doesn’t seem that long ago, and in less time than that I’ll be SIXTY-TWO! So this is a thing that’s going to happen, and I’m starting to wonder how my (finger quotes) “plan” (finger quotes) is going to work out. I’m sure you’ll be surprised to know that, after careful deliberation, I realized that the original plan is a bit of a disaster.

I could go into detail, and in fact I may at some later date, on exactly how I plan to address this (basically this), but I wanted to just throw this out there: if you don’t have a real retirement plan and some means of tracking your progress, you really owe it to yourself to sit down and take a long hard look and attempt to formulate one. There are a number of sites out there that can help you determine first what you’ll likely need to cover annual expenses, next what you’ll need to plan on saving to cover these expenses, and also how best to put all of this together in a way that gives you the best chance at success. I’m partial to Personal Capital myself, but Betterment also has some nice tools available. I’m sure there are many others. If you provide honest and conservative estimates with regard to your financial needs the results may surprise you. You may, as we did, need to take steps to radically change your lifestyle, saving, and spending habits to get back on track.

I keep thinking about this: before I started paying attention I would have thought $1,000,000 was a lot of money. I mean it is, but it’s really not that much money when you consider that the sort of standard rule of thumb is that you should plan on drawing your retirement saving down by no more than 4% annually. That means if you want to be able to count on paying yourself $50,000 a year (which is less than the average household income) then you should plan on a nest egg totaling $1,250,000! This was a bit of a revelation to me. I’d simply not given it much thought. But now I realize that I would no longer categorize a millionaire as super wealthy. It’s really what every single household needs to work towards if they want to go into retirement with any degree of comfort.
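The arithmetic behind that rule of thumb is simple enough to sketch. This is back-of-the-envelope math, not financial advice, and the function name is made up for illustration:

```swift
// A toy 4% rule calculator: the nest egg you need is your annual
// spending divided by the withdrawal rate. Using whole-dollar and
// whole-percent integers keeps the arithmetic exact.
func requiredNestEgg(annualSpend: Int, withdrawalRatePercent: Int = 4) -> Int {
    // annualSpend / (rate / 100) is the same as annualSpend * 100 / rate
    return annualSpend * 100 / withdrawalRatePercent
}

print(requiredNestEgg(annualSpend: 50_000)) // 1250000
```

Run it with $40,000 of annual spending and you still need a cool million, which is the point of the paragraph above.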

Aw Funk

Over the past year or two I have, on a number of occasions, fired up MarsEdit with every intention of explaining my absence from this site and social media during that time. And each time I sit for a few moments, fingers hovering over the keyboard, ready to burst forth with all that I wish to say… and each time generally ends with a “whatever” and/or the realization that I don’t have the time or energy to defend my opinions on whatever topic, and that those with whom I disagree are unlikely to be swayed anyway, so what’s the point? So much of social media and the internet these days seems to exist to make sure everyone is well aware of your opinions and your feelings toward those who don’t share them.


Today I’m looking over this site and realizing that I did not post a single thing here in 2016. I posted 8 or fewer tweets in seven of the last twelve months. My Facebook usage has seen a similar decline. And it’s not that I don’t have things to say or that nothing’s been going on. There’s been plenty. Those close to me know that these last couple of years have indeed been interesting times. I can’t put an exact date on when this “funk” first began to settle in, but it was certainly there by the fall of 2015.

I don’t like it. I’m not sure what to do about it, or if this represents the first step of a return to form for me. Honestly, it was seeing Manton’s thing on Kickstarter and thinking, man, it sure would be cool to have a thing that I could get super fired up about again. Or maybe at this point step one is just writing down how cool it would be to have a thing to get super fired up about again. It feels like something’s gotta give for sure.

Genealogy Update: Layne Edition

A few months back I attended a reunion for the descendants of Henry Miller Layne where the desire to document my ancestry was reawakened after a long long slumber. My initial “research” consisted of little more than copying research originally completed by Floyd Benjamin Layne in the 1950s for his book Layne-Lain-Lane Genealogy. My focus then was geared more towards finding as many relatives as possible with little concern given to documenting my sources or trying to learn who my ancestors really were.

I wanted to change that this time around, starting with my oldest Layne ancestor, John Hiram Lain of Virginia. Given the amount of information that is available today, perhaps I could learn a bit more than old Floyd did back in the 1950s. I began by trying to learn more about John Hiram’s origins. Exactly when did he arrive in middle Tennessee, and why? I didn’t make a great deal of progress just focusing on John, so I then started learning what I could of his children, hoping that would turn up more clues as to the family’s origins. I discovered a good bit and thoroughly documented all I learned of John and his first two children, David and Elizabeth, but mostly what I learned was that Floyd Benjamin’s book is riddled with inaccuracies.

Before moving forward with his next child I decided to take a step back and fill in and correct some of the information from the book at a higher level; I needed a sturdier platform from which to do the more detailed work. When I began I had somewhere in the neighborhood of 350 Laynes with no birth information of any kind. This made it hard, when searching my tree, to figure out, for example, which of the 35 John, 34 William, or 17 Daniel Laynes I was trying to locate.

After several weeks of work I’ve updated information on hundreds of Laynes and Layne relations, and I’m now missing birth year information on only 81 Laynes. Now that that’s complete I intend to get back to my original goal, which was learning and documenting everything I can about John Hiram Layne and his children. Although I’m going to skip ahead to Isaac Layne next, because Floyd Benjamin’s book appears to have a pretty significant mistake there that I need to correct.

Hopefully I’ll have Isaac documented soon, but in the meantime if you find any errors or omissions, kindly point them out and I’ll correct them as soon as I can.

WWDCs Past (2015 edition)

Seems like this year’s WWDC announcement should be popping up soon. With that in mind I figured I’d post an updated WWDC announcement table. It’s getting to be a whole tradition now or something. Sort of goes with my tradition of not getting a ticket for the last few years :/

| Year | Announce Date | Announce Day of Week | Conference Date | Week in June (full) | Days Notice | Time to Sell Out |
| ---- | ------------- | -------------------- | --------------- | ------------------- | ----------- | ---------------- |
| 2005 | Feb 15, 2005 | Tuesday | Jun 6, 2005 | 1st | 111 days | n/a |
| 2006 | Mar 8, 2006 | Wednesday | Aug 7, 2006 | n/a | 152 days | n/a |
| 2007 | Feb 7, 2007 | Wednesday | Jun 11, 2007 | 2nd | 124 days | n/a |
| 2008 | Mar 13, 2008 | Thursday | Jun 9, 2008 | 2nd | 88 days | 60 days |
| 2009 | Mar 26, 2009 | Thursday | Jun 8, 2009 | 1st | 74 days | 30 days |
| 2010 | Apr 28, 2010 | Wednesday | Jun 7, 2010 | 1st | 40 days | 8 days |
| 2011 | Mar 28, 2011 | Monday | Jun 6, 2011 | 1st | 70 days | 12 hours |
| 2012 | Apr 25, 2012 | Wednesday | Jun 11, 2012 | 2nd | 47 days | 1h 43m |
| 2013 | Apr 24, 2013 | Wednesday | Jun 10, 2013 | 2nd | 47 days | 2 minutes |
| 2014 | Apr 3, 2014 | Thursday | Jun 2, 2014 | 1st | 60 days | lottery |
| 2015 | Apr 14, 2015 | Tuesday | Jun 8, 2015 | 1st | 54 days | lottery |

Announced on:

| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
| --- | --- | --- | --- | --- | --- | --- |
| 0 times | 1 time | 2 times | 5 times | 3 times | 0 times | 0 times |

Swift Optional Chaining Performance

Optional chaining in Swift offers a convenient mechanism for testing an optional value embedded in a statement without having to bother with messy binding or dangerous implicit unwrapping. It’s basically just a bit of syntactic sugar that internally converts something like this:

foo?.bar = 42

into this:

if let unwrappedFoo = foo {
  unwrappedFoo.bar = 42
}

This is fine when used in moderation. However I occasionally run across code like this:

foo?.bar = 42
foo?.baz = 3.14

This makes me a little itchy. My worry has been that the compiler then generates code as if it encountered the following:

if let unwrappedFoo = foo {
  unwrappedFoo.bar = 42
}
if let unwrappedFoo = foo {
  unwrappedFoo.baz = 3.14
}

You would (hopefully) never write something like this but that’s how the compiler is going to interpret all those chained optionals… Or is it? Maybe the compiler is smart enough to figure out what is going on here and I should just relax and let it do its thing?

Nah, I need to know what’s going on. So I cobbled together a few contrived examples in Xcode and asked it to generate some assembly for me… except Xcode can’t yet show you the assembly for a Swift file. Sigh. Okay, well, Google can probably tell me how to look at the assembly, and sure enough I found this lovely article (which, by the way, also introduced me to Hopper, which is pretty awesome).

Armed with Hopper and a bit of knowledge I set about examining the assembly produced with a variety of techniques and optimization levels with my sample code.

My first test was an unoptimized comparison of a function using optional binding versus the equivalent using optional chaining (letTest vs. chainTest in the sample code), which yielded assembly with the following lengths*.

| | Unoptimized Opcode Count |
| --- | --- |
| Optional Binding | 138 |
| Optional Chaining | 248 |

As I suspected, the optional chaining was much less efficient. Not really surprising, until I examined the same functions with optimizations turned on.

| | Optimized Opcode Count |
| --- | --- |
| Optional Binding | 87 |
| Optional Chaining | 82 |

Wait, what? The compiler was somehow smart enough to figure out what I was doing, and it didn’t just match the optional binding approach, it beat it. Looking over the assembly, it appears the optional binding approach included an extra retain / release.

After the first batch of results the relaxed approach is starting to look better. Maybe I just hammer on the keyboard and the compiler somehow just figures everything out for me. But first another test. This sample is identical except these are methods instead of global functions. First the unoptimized results.

| | Unoptimized Opcode Count |
| --- | --- |
| Optional Binding | 147 |
| Optional Chaining | 195 |

Actually a bit more respectable here than the global counterparts, but optional binding is still much more efficient. And the optimized results…

| | Optimized Opcode Count |
| --- | --- |
| Optional Binding | 102 |
| Optional Chaining | 132 |

Interesting. This is what I expected originally. But why the difference between a method and the function? I imagine it’s because the properties could have setters or observers that alter the value of tObj, so the compiler can’t be confident that it doesn’t have to re-test the value of tObj for each assignment.
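That hazard is easy to demonstrate with a contrived sketch (all of these names are made up, and this is not the benchmark code from above): a property observer can nil out the optional itself, so the second chained statement genuinely has to re-test it.

```swift
class Widget {
    var bar: Int = 0 {
        didSet { foo = nil }  // the observer mutates the optional itself
    }
    var baz: Double = 0.0
}

var foo: Widget? = Widget()
let w = foo!  // keep a second reference so we can inspect the instance

foo?.bar = 42    // unwraps foo, assigns, and the observer nils foo out
foo?.baz = 3.14  // foo is nil now, so this assignment is silently skipped

print(w.bar, w.baz) // 42 0.0
```

Because each `foo?.` must behave this way, the compiler can’t blindly collapse a run of chained statements into a single unwrap unless it can prove nothing mutates the optional in between.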

In the end, using a series of optionally chained statements is not horrible, and in at least one case it’s actually faster than optional binding, but personally I’m going to continue to do what I can to provide those additional clues to the compiler and to future maintainers of my code (including myself) as to my intent, where practical.

Of course this just goes for a series of optionally chained statements. If I’m only evaluating that optional once (maybe even twice if I’m feeling naughty) then optional chains are perfect. Any more than that though and it’s getting wrapped in an optional binding.
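For what that preference looks like in practice, here’s a minimal sketch (names are mine): several accesses to the same optional get one explicit unwrap.

```swift
class Settings {
    var bar = 0
    var baz = 0.0
}

var settings: Settings? = Settings()

// Unwrap once; every statement in the block then works with a plain
// non-optional reference, and the intent is visible to the reader.
if let s = settings {
    s.bar = 42
    s.baz = 3.14
}

print(settings?.bar ?? -1) // 42
```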

*Using the length of the generated assembly as a measurement of efficiency is not always the best idea. The compiler could be unrolling loops or applying any number of other optimization techniques that don’t end up generating less code. However, this example is pretty simple, and length serves as a decent yardstick here.