A Fresh Coat of Paint

I spent some time remodeling the old digs here, adding a fresh coat of paint in the form of a new theme, new hosting, and a shiny new SSL lock.

The new paint comes in the form of the Swiss Jekyll Theme, which I thought looked fun while keeping the site readable. The only configuration I did was to switch the theme color to orange, just to brighten things up a bit. And I like orange. I decided a while back that designing my own theme comes at the price of not writing as much, because I tend to spend too much time tweaking the theme. Now, I search around and browse through twenty or thirty themes until I find a few that I like, then I download each of them, drop my content in, run the site locally, and see how my posts look. Most of the time I just go with my gut… do I feel happy when I look at the site? If not, I move on. For now, I’m happy with how this looks.

I was inspired by Jack Baty’s post on moving to Amazon to migrate my own blog over to S3 and CloudFront. I love that SSL comes along for the ride. Although I don’t need it for anything here, at least Chrome users won’t be subject to a warning when visiting my site. I remember advocating for this back in 2002 in a paper I wrote for college: encrypt everything.

Apart from that, the site is still my outlet for talking about technology, design, and culture. It’s a place for me to own my words, and hopefully say something useful. Maybe, someday years from now, my kids or my grandkids will find this and have some idea of who I was, and what I stood for.

The Yale Political Experiment

Link

Yale asked participants in the study to imagine a magic genie granting them either the power to fly or complete physical invulnerability.

But if they had instead just imagined being completely physically safe, the Republicans became significantly more liberal — their positions on social attitudes were much more like the Democratic respondents. And on the issue of social change in general, the Republicans’ attitudes were now indistinguishable from the Democrats. Imagining being completely safe from physical harm had done what no experiment had done before — it had turned conservatives into liberals.

In both instances, we had manipulated a deeper underlying reason for political attitudes, the strength of the basic motivation of safety and survival. The boiling water of our social and political attitudes, it seems, can be turned up or down by changing how physically safe we feel.

I haven’t read the study, so I can’t vouch for it completely, but this rings true to me. Why do they need guns? Because they are afraid of being harmed. Why do they not want immigrants? Because they are afraid of their way of life changing. Why do they not want social change? Because they are afraid of the unknown. Understandable, given where they get their information.

The Mediacom SSH Issue

Sometimes it’s a miracle the Internet works at all. For the past week or so I’ve been unable to clone, pull from, or push to private Git repositories on either Bitbucket or GitHub using the normal git clone git@bitbucket.org:whatever/whatevs.git syntax. The problem had the symptoms of a blocked port or a bad network route; I’d issue the command in Terminal and wait, and wait, and wait, and the command would eventually time out. After a quick look at the GitHub documentation I tried ssh -T git@github.com, which also timed out and confirmed my suspicions. The SSH protocol was not getting through, but VPN and normal web traffic were.
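
If you ever need to confirm the same symptom yourself, a couple of quick shell commands will tell you whether SSH traffic is leaving the network at all. This is just a diagnostic sketch; the hosts are the public Git services and nothing else is assumed:

```sh
# Attempt an SSH handshake with verbose output; a healthy
# connection prints a greeting and exits, a blocked port hangs.
ssh -vT git@github.com

# Check whether anything answers on port 22 at all, with a short
# timeout so a block shows up as a quick failure instead of a
# long wait.
nc -vz -w 5 github.com 22
```

If the second command succeeds while the first still hangs at authentication, the problem is more likely keys or configuration than the network itself.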

My first thought was that the issue was probably related to my new Mediacom modem. For my wife’s sake, we recently went back to cable, and when the installer came out to the house he replaced my modem with a newer, fancier one. I don’t think I needed the upgrade, although my speed has been bumped up a bit since the new one was put in, so whatever. Along with the new modem, the installer insisted on setting up “home networking”, which is Mediacom’s way of charging you to use the crappy wireless built into the modem. I explained to the installer that not only did I not need his assistance setting up my network, but that I had also invested in a very nice Eero mesh networking setup that was far better than anything Mediacom could give me.

This resulted in a few phone calls between the installer and his supervisor. The installer informed me that he couldn’t turn off home networking or I’d be charged an additional $40 per month. That seemed ridiculous to me, but again, whatever. I asked the installer to just make sure my wireless worked the way it did before he came out. He obliged, my speed was good, and he finished setting up cable and left.

It wasn’t until a day or two later that I noticed the SSH problem. It was easy enough to verify that the issue was isolated to my home network using the Instant Hotspot feature of my iPhone. I verified that SSH to my repositories still worked over cellular and decided to get in touch with Mediacom. But, knowing what I know of them, I figured I’d better cover all my bases and also get in touch with Eero.

I thought I might need to talk to them because I have “Advanced Security” turned on in Eero Plus. This, among other things, checks outbound traffic for known malicious destinations and blocks it. I wondered if it might mistakenly be blocking my SSH traffic, but after I contacted Eero support on Twitter they verified that was not the case. They did, however, point to what proved to be the problem. Eero advised me to contact Mediacom and make sure that the modem was in bridge mode and that we were not in a double-NAT setup. After checking my Eero’s network settings I saw that it had been assigned a private IP address, and knew that had to be the case.
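
For anyone wanting to check for the same thing, here’s a minimal sketch of how a double NAT shows up from any machine on the network. It assumes nothing about the gear beyond the routers answering traceroute, and ifconfig.me is just one of many what’s-my-IP services:

```sh
# If the first two hops are both in private address space
# (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), traffic is
# almost certainly passing through two layers of NAT.
traceroute -m 3 8.8.8.8

# Compare this public address with the WAN address your router
# reports; if the router's WAN address is private, the modem in
# front of it is still doing NAT.
curl -s https://ifconfig.me
```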

Then I waited a week to get in touch with Mediacom because I figured explaining all this to tech support would be a nightmare.

Turns out, it wasn’t so bad. The first person I talked to had no idea what was going on and sent me over to “Tier 2”. Tier 2 helpfully explained that they do not offer SSH service, which I was fine with because I didn’t want to SSH to Mediacom. Then my call was disconnected while I was on hold, so I called back. The third person I talked to understood exactly what I needed. He was able to turn off the wireless networking on my modem, switch it into bridge mode so it acted as just a basic modem, and reboot it, and just like that, outbound SSH worked again. The first two support representatives met my expectations; the third exceeded them. I should probably have higher expectations for Mediacom, but here we are.

I’m glad to have everything working again. Not being able to clone repositories using the standard git+ssh protocol was really cramping my style. Hopefully I won’t have to change anything about this setup for a few more years. I suppose the next thing I look into will probably be IPv6.

Dropbox and AWS

Dropbox apparently saved a boatload of money by moving their infrastructure off of AWS and building out their own data centers. Taken at face value, this might seem like a strike against AWS, but the way I see it, this was the only way Dropbox was going to be able to differentiate themselves as a service. Their job is to provide cloud storage, something AWS can do easily, but if that’s the only thing you are getting from AWS, it’s going to cost an arm and a leg. This news once again got me thinking about the industry as a whole, my place in it, and how I think about cloud services.

If you’ve been in the technology industry for a while, it’s easy to think about cloud services like AWS as just another data center: a place where you can spin up a new virtual machine and manage your server instance just like you would if you were racking hardware or using VMware. AWS will happily let you do that, but you’ll be missing out on the advantages that thinking about the cloud as a development platform offers.

The biggest innovation from AWS is that you only ever pay for what you use. Compare this with managing a data center, where you have to purchase enough hardware to handle peak utilization for the next several years, and then watch as your investment sits underutilized most of the time. You also have to make an educated guess about expansion. Did you buy enough storage? How about memory? Will you need to purchase more in the future? How much of an interruption will that be if you do? Cloud services take care of all of that by providing only what’s needed when you need it, as long as you take advantage of what the platform offers.

Over the years I’ve noticed a trend toward simplification in the systems that I build and design. In general, the fewer moving parts in a system, the better: it will be more reliable, and easier to troubleshoot when something does go wrong. Unfortunately, I’ve also watched as the industry went in the opposite direction, building houses of cards out of frameworks and management systems barely out of their infancy. It may be that I’m jaded by my experience, or it may be that I’m just a grumpy old man, but whenever I hear about someone building out a Kubernetes infrastructure to run their Docker containers in the cloud I get the suspicion that someone along the way got a case of the clevers1. Is that necessary? Is it really?

I should be specific: the fewer moving parts in my part of the systems I build, the better. Amazon, Google, and Microsoft undoubtedly have several layers of gears running their underlying systems, but the trick is that they use those same layers for thousands of customers running millions of workloads. For example, my favored setup uses very small instances in an autoscaling group to automatically add and remove capacity as needed, reflecting the ebb and flow of normal business traffic. If sales sends out a big promotional email, the system recognizes the increase in traffic and spins up enough instances behind the load balancer to handle it. When traffic dies back down, the system terminates the additional instances, all transparently behind the scenes. The configuration for the build is simple enough, and the documentation for what each component does is straightforward. What’s more, Amazon themselves use the same systems to build out higher-level services.
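
As a rough illustration of what that takes to set up, here’s a sketch using the AWS CLI instead of a full CloudFormation template. Every name, ARN, and number below is a placeholder for illustration, not my actual configuration:

```sh
# Create a small autoscaling group attached to an existing load
# balancer target group (names and ARN are made up).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-launch-config \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"

# Track average CPU at 60%; AWS adds instances when traffic
# spikes and terminates them when it dies back down.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 60.0
  }'
```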

Growing your instance size or changing its type is also a simple matter. If the base size is too small or too big, a simple adjustment to the CloudFormation template will automatically create a new instance, deploy the latest revision of the application to it, and shut down the old one. Because you’re building on the massive scale of AWS, you’re virtually guaranteed never to run out of resources. Also, unlike hardware in a data center, Amazon’s instances tend to get both faster and less expensive with each new generation. Simply upgrading your instance type can save time and money.
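
Assuming the template exposes the instance type as a parameter (the stack, file, and parameter names below are hypothetical), the whole change is one command:

```sh
# Bump the instance type by overriding a template parameter.
# With an appropriate UpdatePolicy on the autoscaling group,
# CloudFormation rolls in replacement instances before the old
# ones are terminated.
aws cloudformation deploy \
  --stack-name web-stack \
  --template-file template.yml \
  --parameter-overrides InstanceType=t3.small
```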

For a lot of businesses out there that aren’t Dropbox, AWS can provide a path to simplifying your infrastructure, letting you build a secure, scalable, and cost-effective environment. If this sounds appealing but you need someone to handle the complexity for you, get in touch; maybe we should talk.

  1. Much respect, Mr. Poush. 👊 ↩︎

A New macOS

I’ve heard several people on podcasts or in blog posts claim that they’d like to see Apple hold off on new features nobody wants and just fix the existing bugs in the Mac. This claim is normally followed by the assertion that they can’t imagine what Apple could add to the Mac at this point anyway, since macOS is a stable, mature operating system. Well, I can think of a few things.

Let’s start with the Mac’s management of information. There is, supposedly, artificial intelligence baked into the Mac in the form of Siri, an AI assistant that is supposed to be able to answer questions and perform basic tasks. Basic, as in “far too basic”. I’ve yet to find a way that the AI actually helps me get anything done. Siri has hooks into all of the built-in apps that manage my information: Notes, Contacts, Reminders, Calendar, Mail, Spotlight, and the file system, but doesn’t help me find relevant information unless I specifically ask her for it. What if Siri could suggest additional relevant information with a simple swipe of the trackpad? The “Today” view in the Mac’s right-hand sidebar would be a perfect place to show relevant files, folders, emails, contacts, notes, or tasks based on textual analysis and AI.

If you have an email selected in Mail, you could swipe over from the right and see a list of other information related to that email. Likewise if you select an event in Calendar, or a Note, etc… Eagle-eyed readers will most likely note the similarities between my suggestion and the fantastic DEVONthink. That’s no accident. I’ve felt for a long time that the AI at the core of DEVONthink would be spectacular if built into macOS. In DEVONthink, selecting a file and clicking on their little top hat icon will show you a list of related documents and suggest an appropriate group, or folder, for the document. I’d like to see the Mac sherlock that bit of functionality and expand it to the entire system.

Speaking of things that should be system-wide, how about tags? I suggest the next version of macOS come with the ability to tag everything. Let’s put tags in notes, todos, calendar events, contacts… the entire suite of productivity apps built into the Mac. The whole shebang, as they say. And, to be clear, let’s make sure they are the same tags, not siloed tags unique per application. That wouldn’t be nearly as useful in the big macOS data soup.

Next, you know how Apple bought the iOS app “Workflow”? How about we stop messing around and have Apple buy Hazel and ship it as a built-in part of the Mac? It already fits right in, running invisibly in the background and acting on rules set in its preference pane. Of course, if Apple bought it, we could put rules for everything in Hazel. We could have Hazel triggering rules not just for files and folders, but for Mail, Calendar, Reminders… are you getting it?

Finally, let’s build real security into every part of the Mac. I suggest encrypting everything with the user’s private key before it is synced off of the machine. Transmitting files over SSL is fine. Storing files on the server on an encrypted disk is also fine. However, the files themselves are not encrypted, which means (as I understand it) that while the iCloud server is running, the files are available. The encryption on the server just means that if someone tried to run off with the hard drive, they couldn’t get into it. That’s why you can get to your files from the web: your login to icloud.com does not decrypt the files, it simply allows you access to them. End-to-end encryption would be the next step Apple could take to ensure that their customers’ information is safe and secure, even when transmitted across the Internet.
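
To make the distinction concrete, here’s a minimal sketch of client-side encryption with openssl, using a symmetric key for brevity (the filenames and key file are stand-ins). The point is that ciphertext is all that ever leaves the machine:

```sh
# Encrypt locally before syncing, so the provider only ever
# stores ciphertext it cannot read.
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in notes.txt -out notes.txt.enc -pass file:./secret.key

# Only someone holding the key can reverse it, whether the
# server is running or not.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in notes.txt.enc -out notes.txt -pass file:./secret.key
```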

The purpose of a computer is to be a bicycle for the mind, a way to not only manage information, but help you make better decisions, cook better meals, make better deals, build better software, write better papers. Whatever you do, a computer should seamlessly help you do it better, and get out of your way to let you do it.

If the Mac is the pickup truck, or SUV, of the computing world, and the heaviness of macOS is what lets iOS stay light, then my suggestion is to lean into that philosophy and make the Mac the absolute best tool for managing information. A professional grade tool for professional daily use. And, obviously, since this is us, do it with style.

Why 2017 Was the Best Year in Human History - The New York Times

Link

We all know that the world is going to hell. Given the rising risk of nuclear war with North Korea, the paralysis in Congress, warfare in Yemen and Syria, atrocities in Myanmar and a president who may be going cuckoo, you might think 2017 was the worst year ever.

But you’d be wrong. In fact, 2017 was probably the very best year in the long history of humanity.

It’s good to get some perspective, get away from the talking heads on TV and the dire predictions of gloom and doom, and realize that, by the numbers, the world is actually getting better, not worse. Can and should we do more? Absolutely. But let’s not let the weight of what’s not gone right burden us so much that we can’t see what has.

The Future of DevOps is AI

The work of systems administration, that is, racking new hardware, running cables, and loading operating systems, is quickly being eclipsed by devops. Servers come from the factory ready to rack, and the base operating system has become nearly meaningless in the context of running applications, thanks to Docker. All you need is a baseline Linux install; the specifics of what each application needs to run are taken care of inside the Docker container.
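
A hypothetical container makes the point: the host needs nothing but a kernel and the Docker daemon, because everything else ships in the image. The image name here is made up:

```sh
# Run an app whose runtime, libraries, and config all live
# inside the image; the host distro is irrelevant.
docker run --rm -d -p 8080:8080 --name whatevs myorg/whatevs:latest

# Follow the app's output to confirm it came up.
docker logs -f whatevs
```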

If you are running your workloads in the cloud1, the dedicated sysadmin is even more redundant2, since both the hardware and the base operating system are outsourced to AWS, Microsoft, or Google. However, bridging the gap between writing application code and deploying it to production is devops, which for the past several years has been a fantastic opportunity for sysadmins to branch out in new and interesting directions. That opportunity is not going to last forever, though. The core tasks of a devops engineer are repeatable, and the complexity that devops now handles is moving more and more toward simplification.

In devops, there is a starting point, a set of tasks, and a specific end goal to achieve. Primarily, a devops engineer takes code from a repository and builds out the systems to test and deploy that code to production in a secure, scalable environment. That’s what I do, anyway. History tells us that any repeatable, programmable task will be automated by a machine sooner or later, and I think we are starting to see what those new machines will look like.
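
Reduced to a hypothetical script, that loop looks something like this (the test and deploy commands are stand-ins for whatever a given shop uses):

```sh
#!/bin/sh
# The devops loop in miniature: fetch, test, build, deploy.
# Every step is deterministic, which is exactly what makes it
# a candidate for automation.
set -e
git clone git@bitbucket.org:whatever/whatevs.git app
cd app
make test                         # run the test suite
docker build -t whatevs:latest .  # bake a deployable artifact
./deploy.sh production            # push it to the environment
```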

Jason Kottke posted a link to a Chess.com article earlier today about how Google’s AlphaZero AI became the best chess player in history in four hours. It wasn’t loaded with a chess program; the AI taught itself how to play and became unbeatable. Putting Skynet theories temporarily aside, how big of a step is it from there to automating devops? An AI could be trained on GitHub, given AWS or GAE as a deployment ground, and given the end goal of getting each application up and running.

Programmer friends I’ve floated this idea to in the past had mixed replies. Some claimed that the AI necessary to do this would be beyond our current capabilities. Others said that it might be possible. I think the writing on the wall is clear: it is possible, and I imagine it’s only a matter of time before the big three cloud providers each deploy their own version of automatic code deployment. It won’t look like much from the outside; the only configuration they’ll need is access to your repository, and the AI will build up everything else necessary.

The models exist, the artificial intelligence exists, and the need for the service exists. DevOps is a career path with an expiration date. Whether that date is 10, 20, or 30 years down the road, I can’t tell, but it’s closer than any of us working in the industry would care to consider.

  1. Which you really should be. ↩︎

  2. Well, that is if you are running your applications in the cloud the way the cloud was meant to be used. ↩︎

The Fight for Public Land in Montana's Crazy Mountains | Outside Online

Link

In the fall of 2016, Rob Gregoire, a hunter and nearly life-long Montanan, won a state lottery for a permit to take a trophy elk in the Crazy Mountains, which rise from the plains about 60 miles north of Yellowstone National Park. Landowners around the mountains were charging about $2,000 for private hunts on their ranches. “That’s just not what I do, on principle,” Gregoire says. So he found a public access corridor that would take him into prime Crazies elk country—the federal land covered by the permit, which in total cost about $40.

Such trails have led into the Crazies for generations. And disputes between landowners and those who would cross their properties on these trails reach back nearly that far, too. By 2016, the trailhead Gregoire found was “the last non-contested public access point on the 35-mile-long eastern flank of the Crazy Mountains,” he would write later to his U.S. senators.

Yet even on what Gregoire thought was a public throughway, the Hailstone Ranch had posted game cameras and signs claiming that the Forest Service didn’t have an easement to use the segment that crossed the private property. After consulting with the Forest Service, Gregoire decided to hike the route anyway. He used an app to stay on trail where it seemed faint, to make sure he kept to public land. Then one evening as he returned toward the trailhead after an unsuccessful hunt, Gregoire found a deputy sheriff from Sweet Grass County waiting for him. The deputy handed Gregoire a ticket for criminal trespass. After court costs, the ticket cost $585.

Ridiculous. When Republicans talk about taking land away from the government and giving it “back to the people”, what they really mean is taking publicly owned land and giving it to wealthy landowners. These landowners then put up no-trespassing signs. How is that giving land back to the people again?