jb… a weblog by Jonathan Buys

The Manual with Tim Walz

September 21, 2024

Love this guy. Patrick Rhone calls him folksy, I agree.


Loading and Indexing SQLite

October 19, 2023

What a difference a couple of lines of code can make.

I recognize that databases have always been a weak point for me, so I’ve been trying to correct that lately. I have a lot of experience with managing database engines, failover, filesystems, and networking, but too little with the internals of the databases themselves. Early this morning I decided I didn’t know enough about how database indexes worked. So I did some reading, got to the point where I had a good mental model for them, and decided I’d like to do some testing myself. I figured 40 million records was a nice round number, so I used fakedata to generate 40 million SQL inserts that looked something like this:

INSERT INTO contacts (name,email,country) VALUES ("Milo Morris","pmeissner@test.tienda","Italy");
INSERT INTO contacts (name,email,country) VALUES ("Hosea Burgess","kolage@example.walmart","Dominica");
INSERT INTO contacts (name,email,country) VALUES ("Adaline Frank","shaneIxD@example.talk","Slovenia");

I saved this as fakedata.sql, piped it into sqlite3, and figured I’d just let it run in the background. After about six hours I realized it was taking a ridiculously long time, and I estimated I’d only loaded about a quarter of the data. I believe that’s because SQLite was treating each INSERT as a separate transaction.

A transaction in SQLite is a unit of work. SQLite ensures that writes to the database are Atomic, Consistent, Isolated, and Durable, which means that for each of the 40 million lines I was piping into sqlite3, the engine was ensuring that the line was fully committed to the database before moving on to the next one. That’s a lot of work for a very, very small amount of data. So, I did some more reading and found a recommendation to explicitly wrap the entire load in a single transaction, so my file now looked like:

BEGIN TRANSACTION;

INSERT INTO contacts (name,email,country) VALUES ("Milo Morris","pmeissner@test.tienda","Italy");
INSERT INTO contacts (name,email,country) VALUES ("Hosea Burgess","kolage@example.walmart","Dominica");
INSERT INTO contacts (name,email,country) VALUES ("Adaline Frank","shaneIxD@example.talk","Slovenia");

COMMIT;

I set a timer and ran the import again:

➜  var time cat fakedata.sql| sqlite3 test.db
cat fakedata.sql  0.07s user 0.90s system 1% cpu 1:13.66 total
sqlite3 test.db  70.81s user 2.19s system 98% cpu 1:13.79 total

So, that went from 6+ hours to about 71 seconds. I imagine if I did some more optimization (possibly using the Write-Ahead Log?) I could get the import faster still, but a little over a minute is good enough for some local curiosity testing.
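The same experiment can be reproduced with Python’s built-in sqlite3 module (a sketch, scaled down from the original 40 million rows; the table name matches the post but the data here is a stand-in). The `with conn:` block wraps all the inserts in one transaction, which is exactly the BEGIN/COMMIT fix above:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT, country TEXT)")

# Stand-in data; the real test used 40 million fakedata-generated rows.
rows = [("Milo Morris", "pmeissner@test.tienda", "Italy")] * 100_000

start = time.perf_counter()
# One explicit transaction: executemany runs inside it, so SQLite
# commits once instead of once per row.
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)", rows)
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count, f"rows in {elapsed:.2f}s")
```

For a file-backed database, `PRAGMA journal_mode=WAL` before the load is the Write-Ahead Log option hinted at above, though I haven’t measured how much more it would buy here.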

Indexes

So… back to indexes.

An index is an additional data structure that holds the values of one or more fields, along with a pointer back to the record each value belongs to. Because the index is kept sorted, it can be searched with a binary search instead of scanning every record.
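The idea can be sketched in a few lines of Python (a toy model for illustration, not SQLite’s actual B-tree implementation): the “index” is a sorted list of (value, rowid) pairs that bisect can search in O(log n), and each hit points back into the table.

```python
from bisect import bisect_left

# The "table": rowid -> record, in insertion order.
table = {
    1: ("Milo Morris", "Italy"),
    2: ("Hosea Burgess", "Dominica"),
    3: ("Adaline Frank", "Slovenia"),
}

# The "index": (field value, pointer back to the rowid), kept sorted.
name_index = sorted((name, rowid) for rowid, (name, _) in table.items())

def lookup(name):
    """Binary-search the index, then follow the pointer to the record."""
    i = bisect_left(name_index, (name,))
    if i < len(name_index) and name_index[i][0] == name:
        return table[name_index[i][1]]
    return None
```

Looking up a name now costs a binary search over the index plus one pointer dereference, instead of a walk over every row.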

One good analogy is the index of a physical book. Imagine that a book has ten chapters and each chapter has 100 pages. Now imagine you’d like to find all instances of the word “continuum” in the book. If the book doesn’t have an index, you’d have to read through every page in every chapter to find the word.

However, if the book is already indexed, you can find the word in the alphabetical list, which will then have a pointer to the page numbers where the word can be found.

The downside to the index is that it takes additional space. In the book analogy, while the book itself is 1,000 pages, we’d need another ten or so for the index, bringing the total to 1,010 pages. The same is true of a database: the index structure needs room for both the field value being indexed and a small pointer (4 bytes, for example) back to the record.

Oh, and the results of creating the index are below.

SELECT * from contacts WHERE name is 'Hank Perry';
Run Time: real 2.124 user 1.771679 sys 0.322396


CREATE INDEX IF NOT EXISTS name_index on contacts (name);
Run Time: real 22.129 user 16.048308 sys 2.274184


SELECT * from contacts WHERE name is 'Hank Perry';
Run Time: real 0.003 user 0.001287 sys 0.001598

That’s a massive improvement. And now I know a little more than I did.
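SQLite will also tell you whether a query is going to use the index, via EXPLAIN QUERY PLAN. A quick sketch with Python’s sqlite3 module (same schema and index name as above, but this wasn’t part of the original test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT, country TEXT)")

query = "SELECT * FROM contacts WHERE name = 'Hank Perry'"

# Without an index, SQLite must scan the whole table.
# EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

conn.execute("CREATE INDEX IF NOT EXISTS name_index ON contacts (name)")

# With the index, the plan switches to a search on name_index.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(before)  # a table scan, e.g. "SCAN contacts"
print(after)   # a search, e.g. "SEARCH contacts USING INDEX name_index (name=?)"
```

The before/after plan strings make the 2.124s-to-0.003s difference above unsurprising: one is a full scan, the other a binary search of the index.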


The Perfect ZSH Config

August 14, 2023

If you spend all day in the terminal like I do, you come to appreciate its speed and efficiency. I often find myself in Terminal for mundane tasks like navigating to a folder and opening a file; it’s just faster to type where I want to go than it is to click in the Finder, scan the folders for the one I want, double-click that one, scan again… small improvements to the speed of my work build up over time. The right configuration for your shell multiplies that speed; in my case, the shell is zsh.

zsh is powerful and flexible, which means it can also be intimidating to configure yourself. Doubly so when there are multiple ‘frameworks’ available that will do the bulk of the configuration for you. I used Oh My Zsh for years, but I recently abandoned it in favor of maintaining my own configuration, using only the settings I actually need.

I’ve split my configuration into five files:

  • apple.zsh-theme
  • zshenv
  • zshrc
  • zsh_alias
  • zsh_functions

I keep all five files in a dotfiles Git repository, pushed privately to GitHub.

The zshenv file is read first by zsh when starting a new shell. It contains a collection of environmental variables I’ve set, mainly for development. For example:

export PIP_REQUIRE_VIRTUALENV=true
export PIP_DOWNLOAD_CACHE=$HOME/.pip/cache
export VIRTUALENV_DISTRIBUTE=true

The next file is zshrc, which contains the main bulk of the configurations. My file is 113 lines, so let’s take it a section at a time.

source /Users/jonathanbuys/Unix/etc/dotfiles/apple.zsh-theme
source /Users/jonathanbuys/Unix/etc/dotfiles/zsh_alias
source /Users/jonathanbuys/Unix/etc/dotfiles/zsh_functions

The first thing I do is source the other three files. The first is my prompt, which is cribbed entirely from Oh My Zsh. It’s nothing fancy, but I consider it to be elegant and functional. I don’t like the massive multi-line prompts. I find them to be far too distracting for what they are supposed to do.

My prompt looks like this:

 ~/Unix/etc/dotfiles/ [master*] 

It gives me my current path, what git branch I’ve checked out, and if that branch has been modified since the last commit.

The next two files, as their names suggest, contain aliases and functions. I have three functions and 16 aliases. I won’t go into each of them here, as they are fairly mundane and specific to my setup. The three functions print the current path of the open Finder window, use Quick Look to preview a file, and generate a UUID string.

The next few lines establish some basic settings.

autoload -U colors && colors
autoload -U zmv

setopt AUTO_CD
setopt NOCLOBBER
setopt SHARE_HISTORY
setopt HIST_IGNORE_DUPS
setopt HIST_IGNORE_SPACE

The autoload lines set up zsh to use pretty colors and to enable the extremely useful zmv command for batch file renaming. The interesting parts of the setopt settings are the ones dealing with command history: the three history options share command line history between open windows or tabs. So if I have multiple Terminal windows open, I can browse the history of both from either window. An environment feels broken to me when this is missing.

Next, I set up some key bindings:

  # start typing + [Up-Arrow] - fuzzy find history forward
  bindkey '^[[A' up-line-or-search
  bindkey '^[[B' down-line-or-search
  
  # Use option as meta
  bindkey "^[f" forward-word
  bindkey "^[b" backward-word
  
  # Use option+backspace to delete words
  x-bash-backward-kill-word(){
      WORDCHARS='' zle backward-kill-word
  
  }
  zle -N x-bash-backward-kill-word
  bindkey '^W' x-bash-backward-kill-word
  
  x-backward-kill-word(){
      WORDCHARS='*?_-[]~\!#$%^(){}<>|`@#$%^*()+:?' zle backward-kill-word
  }
  zle -N x-backward-kill-word
  bindkey '\e^?' x-backward-kill-word

These settings let me use the arrow keys to browse history, use option + arrow keys to move one word at a time through the current command, and use option + delete to delete one word at a time. Incredibly useful; I use it all the time. Importantly, this also lets me do incremental searching through my command history with the arrow keys. So, if I type aws, then arrow up, I can browse all of my previous commands that start with aws. When you have to remember commands with 15 arguments, this is absolutely invaluable.

The next section has to do with autocompletion.

# Better autocomplete for file names
WORDCHARS=''

unsetopt menu_complete   # do not autoselect the first completion entry
unsetopt flowcontrol
setopt auto_menu         # show completion menu on successive tab press
setopt complete_in_word
setopt always_to_end

zstyle ':completion:*:*:*:*:*' menu select

# case insensitive (all), partial-word and substring completion
if [[ "$CASE_SENSITIVE" = true ]]; then
  zstyle ':completion:*' matcher-list 'r:|=*' 'l:|=* r:|=*'
else
  if [[ "$HYPHEN_INSENSITIVE" = true ]]; then
    zstyle ':completion:*' matcher-list 'm:{[:lower:][:upper:]-_}={[:upper:][:lower:]_-}' 'r:|=*' 'l:|=* r:|=*'
  else
    zstyle ':completion:*' matcher-list 'm:{[:lower:][:upper:]}={[:upper:][:lower:]}' 'r:|=*' 'l:|=* r:|=*'
  fi
fi

unset CASE_SENSITIVE HYPHEN_INSENSITIVE
# Complete . and .. special directories
zstyle ':completion:*' special-dirs true

zstyle ':completion:*' list-colors ''
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#) ([0-9a-z-]#)*=01;34=0=01'
zstyle ':completion:*:*:*:*:processes' command "ps -u $USERNAME -o pid,user,comm -w -w"
# disable named-directories autocompletion
zstyle ':completion:*:cd:*' tag-order local-directories directory-stack path-directories

# Use caching so that commands like apt and dpkg complete are useable
zstyle ':completion:*' use-cache yes
zstyle ':completion:*' cache-path $ZSH_CACHE_DIR

zstyle ':completion:*:*:*:users' ignored-patterns \
        adm amanda apache at avahi avahi-autoipd beaglidx bin cacti canna \
        clamav daemon dbus distcache dnsmasq dovecot fax ftp games gdm \
        gkrellmd gopher hacluster haldaemon halt hsqldb ident junkbust kdm \
        ldap lp mail mailman mailnull man messagebus  mldonkey mysql nagios \
        named netdump news nfsnobody nobody nscd ntp nut nx obsrun openvpn \
        operator pcap polkitd postfix postgres privoxy pulse pvm quagga radvd \
        rpc rpcuser rpm rtkit scard shutdown squid sshd statd svn sync tftp \
        usbmux uucp vcsa wwwrun xfs '_*'

if [[ ${COMPLETION_WAITING_DOTS:-false} != false ]]; then
  expand-or-complete-with-dots() {
    # use $COMPLETION_WAITING_DOTS either as toggle or as the sequence to show
    [[ $COMPLETION_WAITING_DOTS = true ]] && COMPLETION_WAITING_DOTS="%F{red}…%f"
    # turn off line wrapping and print prompt-expanded "dot" sequence
    printf '\e[?7l%s\e[?7h' "${(%)COMPLETION_WAITING_DOTS}"
    zle expand-or-complete
    zle redisplay
  }
  zle -N expand-or-complete-with-dots
  # Set the function as the default tab completion widget
  bindkey -M emacs "^I" expand-or-complete-with-dots
  bindkey -M viins "^I" expand-or-complete-with-dots
  bindkey -M vicmd "^I" expand-or-complete-with-dots
fi

# automatically load bash completion functions
autoload -U +X bashcompinit && bashcompinit


That’s a long section, but in a nutshell it lets me type one character, hit tab, and be offered a menu of all the possible completions for that character. It is case-insensitive, so b would match both boring.txt and Baseball.txt. I can keep hitting tab to cycle through the options, and hit enter when I’ve found the one I want.

The last section sources a few other files:

[ -f ~/.fzf.zsh ] && source ~/.fzf.zsh
[ -f "/Users/jonathanbuys/.ghcup/env" ] && source "/Users/jonathanbuys/.ghcup/env" # ghcup-env
[ -s "/Users/jonathanbuys/.bun/_bun" ] && source "/Users/jonathanbuys/.bun/_bun"
source /Users/jonathanbuys/Unix/src/zsh-autosuggestions/zsh-autosuggestions.zsh
source /Users/jonathanbuys/Unix/src/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh

If I’m experimenting with Haskell, I’d like to load the ghcup-env variables. If I have bun installed (a way, way faster npm), then use that. The final two sources are for even more enhanced autosuggestions and command line syntax highlighting: typos or commands that don’t exist turn red, good commands where zsh can find the executable turn green. The autosuggestions take commands from my history and suggest them; I can type right-arrow to accept the suggestion, or keep typing to ignore it.

Taken together, I’ve been able to remove Oh My Zsh, but keep all of the functionality. My shell configuration is constantly evolving as I find ways to make things faster and more efficient. I don’t consider myself a command line zealot, but I do appreciate how this setup gets out of my way and helps me work as fast as I can think.


p.s. A lot of this configuration was taken from other sources shared around the internet, as well as the zsh documentation. I regret that I haven’t kept references to the origins of some of these configs. If I can find the links I’ll post them here.


Future Work and AI

January 26, 2023

I’ve been trying to wrap my small monkey brain around what ChatGPT will mean in the long run. I’m going to try to think this through here. In many ways the advances we’ve seen in AI this past year perpetuate the automation trend that’s existed since… well, since humans started creating technology. I’ve seen arguments on two ends of a spectrum: from “AI is often wrong and unreliable, and we shouldn’t use it for anything important” to “AI is so good that it’s going to put us all out of jobs”. As with most truths, I think the reality is somewhere in between.

It’s my opinion that jobs that AI can replace, it probably will replace a lot of. But not all. Referring back to our discussion about the current state of Apple news sites, if the site is a content farm pumping out low-value articles for hit counts and views, I can see AI handling that. If your site is well thought out opinions and reviews about things around the Apple ecosystem, that I think will be safe. Because it’s the person’s opinion that gives the site value.

For more enterprise-y jobs, I could see fewer low and mid-level developers. Fewer managers, fewer secretaries, fewer creatives. Not all gone, but certainly less than before. If your job is to create stock photos and put together slide shows, you might want to expand your skill set a bit.

I think… the kind of jobs that will survive are the type that bring real value. The kind of value that can’t be replicated by a computer. Not just the generation of some text or code, but coming up with the why. What needs to be made, and why does it need to be made?

Maybe AI will help free us up to concentrate on solving really hard problems. Poverty, clean water, famine, climate change. Then again, maybe it’ll make things worse. I suppose in the end that’s up to us.


Iowa School Choice

January 17, 2023

The Prairie City Monroe district school board invited state senator Ken Rozenboom to their board meeting Monday, January 16th to discuss his support of the “school choice” bill currently making its way through Iowa legislature. The superintendent started off asking some tough questions of the Senator, who is the chair of the education committee. As he puts it, “school choice is in his lap”. He’s worked on different forms of the bill for seven years.

A lot was said about numbers and funds, percentages and estimates. The superintendent fully expects that the bill will negatively impact the PCM school district, and that the funds to cover the loss will have to come out of school programs. The senator was unconvinced… or at least he couldn’t admit he was convinced. I think the super is right.

But…

None of that matters. None of the numbers or dollars or percentages or estimates actually matter. At one point the Senator said something along the lines of asking us if we knew what was happening in some of the public schools in the state, calling out Lynnville-Sully in particular, where he claimed middle school girls were forced to shower with transgender students1. He also called out another school who spent a week on Black Lives Matter2, with a look of pure disgust. He said that public schools brought this on themselves.

He brushed off any and all criticism as being union talking points, with a smug grin and wave of his hand.

After the meeting was over, a few of us met with him in the hall to ask a few more questions. Again he said that you’d have to have your head in the sand to not know what was going on in public schools “out there”. I raised my hand and said “my head’s in the sand. Seriously, I don’t read the news, I have no idea what’s going on.” I was hoping that he would explain or give a few more examples, but somehow the conversation moved on. I suspect he had no concrete evidence to provide.

What I got from the Senator was that it’s not really about kids, it’s not about education, it’s about the right-wing culture war. That’s what he said, but also what he couldn’t really admit. He said it’s not the republicans, it’s the left, it’s the teachers who are shoving this progressive ideology down our throats. He also said that the median household income across the state of Iowa is $61,000, and followed that up by saying that the median teacher income in Iowa is $61,000. He asked me if I knew how many teachers there were out there making $100,000 a year. I asked him if he knew how many there were in PCM, and told him: zero.

I was flabbergasted at the number of times the Senator would say something, and immediately claim that he didn’t say it.

In the Senator’s mind, schools are funded appropriately, or possibly overfunded. Public school teachers are overpaid and actively poisoning our kids’ minds. He said that the social contract has been irrevocably broken, that we can’t put the genie back in the bottle. I’m guessing what he means by that and the “school choice” program is that public schools should be relegated to serving only those whom the private schools turn away: students in need, students with disabilities or behaviors, students with non-Christian parents. Creating an evermore stratified society of haves and have-nots.

We ended the night with me trying to explain to the Senator that what I heard was that instead of investing in our public schools, we are taking funds away from the schools. Even though we spent the past hour talking about exactly that, the Senator claimed that wasn’t at all accurate. We shook hands and parted ways.

  1. I could find no mention of this online, including in the published school board minutes. Of course, I’m not on Facebook. It’s unknown where the Senator gets his information. 

  2. I assume he’s referring to the 2021 Ames Community School District choosing to spend the first week of Black History Month featuring Black Lives Matter. 


GPG Signing Git Commits

June 9, 2022

On my way towards completing another project I needed to set up gpg public key infrastructure. There are many tutorials and explanations about gpg on the web, so I won’t try to explain what it is here. My goal is simply to record how I went about setting it up for myself to securely sign my Git commits.

Most everything here I gathered from this tutorial on dev.to, but since I’m sure I’ll never be able to find it again after today, I’m going to document it here.

First, install gpg with Homebrew:

brew install gpg

Next, generate a new Ed25519 key:

gpg --full-generate-key --expert

We pick option (9) for the first prompt, Elliptic Curve Cryptography, and option (1) for the second, Curve 25519. Pick the defaults for the rest of the prompts, giving the key a descriptive name.

Once finished you should be able to see your key by running:

gpg --list-keys --keyid-format short

The tutorial recommends using a second subkey generated from the first key to actually do the signing. So, we edit the master key by running:

gpg --expert --edit-key XXXXXXX

Replacing XXXXXXX with the ID of your newly generated key. Once in the gpg command line, enter addkey, and again select ECC and Curve 25519 for the options. Finally, enter save to save the key and exit the command line.

Now when we run gpg --list-keys --keyid-format short we should be able to see a second key listed with the designation [S] after it. The ID will look similar to this:

sub   ed25519/599D272D 2021-01-02 [S]

We will need the part after ed25519/, in this case 599D272D. Add that to your global Git configuration file by running:

git config --global user.signingkey 599D272D

If you’d like git to sign every commit, you can add this to your config file:

git config --global commit.gpgsign true

Otherwise, pass the -S flag to your git command to sign individual commits. I’d never remember to do that, so I just sign all of them.

Make sure that gpg is unlocked and ready to use by running:

echo "test"  | gpg --clearsign

If that fails, run export GPG_TTY=$(tty) and try again. You should be prompted to unlock GPG with the passphrase set during creation of the key. Add the export command to your ~/.zshrc to fix the issue permanently.

Finally, Github has a simple way to add gpg keys, but first we’ll need to export the public key:

gpg --armor --export 599D272D

Copy the entire output of that command and enter it into the GitHub console under Settings, “SSH and GPG keys”, and click on “New GPG key”. Once that’s finished, you should start seeing nice green “Verified” icons next to your commits.


Why BBEdit?

April 27, 2022

I spend a lot of time in the terminal, but I spend an equal amount of time in my text editor. My main requirements for a text editor are that it be equally fast and powerful, but not try to do much more than just be a text editor. I have my terminal a quick command-tab away, so it’s very easy to jump back and forth; I don’t need a terminal built into my text editor. I also need my text editor to be completely accurate. What I see has to be what is written to the file, no “hidden syntax”. That’s why I prefer BBEdit. BBEdit has powerful text editing features, but it’s not overwhelming. It’s not trying to be your entire operating system.

In fact, BBEdit fits perfectly in the macOS environment. I used to be a dedicated vim user, but over time the context switching between the different macOS applications and vim became distracting. One of the great things about macOS is that once you learn the basic keyboard combos they apply almost everywhere, unless the application you are using is not a good Mac citizen. Vim has its own set of keyboard combos, endlessly configurable and expandable. I started forgetting my own shortcuts. BBEdit uses the macOS standard keyboard combos, many inherited from emacs, so if you learn how to navigate text in TextEdit you can apply those same shortcuts to BBEdit.

But, BBEdit is also powerful. You can include custom text snippets for autocompletion, run scripts to manipulate text, and use regular expressions for detailed search and replace operations. With the latest release you can also set up a language server for more powerful syntax checking and completion. BBEdit has been in active development for 30 years, and in that time the developer, Rich Siegel, has continuously improved it and kept it up to date with the ever-changing architectures of macOS. BBEdit feels just as much at home on my M1 MacBook Pro as it did on the Macintosh of the ’90s.

BBEdit hits the right balance of power and simplicity for my workflow. It’s fast, reliable, and fits perfectly in the Mac environment. For as long as Rich is developing it, BBEdit will be in my Dock. I don’t know what will happen when he decides to retire, but I’m hoping that decision is many, many years away.


My 2022 Mac Setup

January 5, 2022

Starting its fifth year on my desk, my 2017 iMac 5k is still going strong, but starting to show its age. There’s still nothing that I can’t do with it, but after looking at my wife’s M1 MacBook Air I can’t help but be just a little jealous of how fast that little machine is. I’ve yet to come across any computer that offers the same value as the iMac, especially when you factor in how good this 27” retina screen is. I’m tempted by both the current 24” M1 iMac, and the new 16” M1 Max MacBook Pro, but the rumor mill is talking about bigger, faster iMacs and colorful M2 MacBook Airs, either of which might actually make the most sense for my desk. I want to see the entire lineup before committing. The good thing is that no matter which machine I choose, it’s going to be a major speed improvement from where I am now.

On the software side I’m in a bit of a state of flux. Old standards are no longer a given, but because I’ve been using them for so long I’m reluctant to step away. Apple’s Reminders has gotten good enough for most things, but after a brief experiment over the holidays I found that I was letting things slip through the cracks, and headed back to OmniFocus. OmniFocus, for all its complexity, feels like home.

1Password’s upcoming switch to Electron has me concerned, but after some consideration I wonder how much of my concern is practical and how much of it is philosophical. Ideally, I would have preferred if 1P 8 was an evolution of 1P 7, instead of an entirely different application, but at the end of the day I’m still going to use it for work, and I still have a family subscription that I get through my Eero subscription, so I’ll keep it around. And if I’m going to have it anyway… why not keep using it? The flip side of that coin is that when using Safari the built-in password manager is seamless, far simpler than 1Password, even for one-time codes. But, do I really want all my passwords only available in Safari? Sometimes I might want to use Firefox. This is all still up in the air.

For file management, I’ve still got DEVONthink, but it’s another one where I’m not sure I’ll keep it. Again, ideally, iCloud Drive would be encrypted end-to-end so I could safely trust that I could put sensitive work data on it and not be putting my career at risk, but that’s not the world we live in. Without DEVONthink I need to keep all my files on my local machine, which is normally fine, but every now and then I want to get to something while I’m out running around, and DEVONthink to Go is the only secure solution. I’ve heard rumors that encryption might be coming, but until it’s here I think I’m going to stick with DT.

Keyboard Maestro and Hazel are both on the chopping block. I still have both installed, but neither are running. For Keyboard Maestro, I honestly can’t keep extra keyboard shortcuts in my head, and I’ve never found that many uses for the application. For a long time the only thing I’d use it for was text expansion, and even then only for the date stamp, i.e. 2022-01-05_. If there comes a time when iCloud Drive is encrypted, I’ll probably turn Hazel back on for automatic file management. But given that everything now gets dumped into DEVONthink, all my files are organized by the app’s AI and I have less to set up and maintain. And if I’m honest, I’ve always had problems with Hazel’s pattern matching; sometimes it would inexplicably not match a pattern in a file I could easily find while searching with Spotlight. Most of Keyboard Maestro’s automation features that I did use I’ve moved into Shortcuts. I’m looking forward to Hazel-like functionality being built into macOS.

Other than that, I’m happy with Day One, I’ve increased my use of the Apple Notes app quite a bit, I love NetNewsWire (although I’ve decreased the number of sites I’m subscribed to for less, but better, content), MindNode is a fantastic mind-mapping app, and I keep all my mail in Mail. The biggest new addition to my setup is Parallels for running Windows 10. I’ve found I need this for teaching the Microsoft Office course at the local community college. Of course, this is another area where I’m unsure what I’ll do when I finally do update to an M-series Mac, but I’ll cross that bridge when I come to it.


The Reminders Experiment

January 3, 2022

Over the past month I’ve been experimenting with cutting down on the number of applications I use on my Mac. One change in particular I thought was going to stick: moving my task management system from OmniFocus to Reminders. I enjoyed the everywhere integration of Reminders, like how Siri would include any tasks I had scheduled for the day when I asked my HomePod “What’s my update?”. Unfortunately, when I sat down at the end of the holiday break to think about the projects I had going at work, and how to schedule them out for the upcoming week, I realized I needed to open OmniFocus, and once it was open, I knew the experiment was over.

I’ve been using the GTD system for so long now that it’s part of how I think. OmniFocus was built around that system. It’s the only to-do app I’m aware of with features like defer dates and built-in weekly reviews. It might be a deep and complex app, but I’ve got it customized to my liking. I know which parts of it I use (perspectives) and which parts I never touch (flagged tasks).

The long and the short of it is that I’ve got a complicated job as a senior devops engineer, I teach two courses at the local community college at night, I’ve got a family, and I’ve got a home and vehicles to take care of. To be able to focus, to be there for my family, for my work, I need a system that I can trust. For me, OmniFocus is the foundation of that system. I’m not excited about the new direction of their mobile app, but I can live with it. I imagine that OmniFocus and I are going to be together for a long time.


The Subtle Art of Snowblowing

January 1, 2022

If you are fortunate enough to live in a home with a driveway, and fortunate enough to live in a region that gets a lot of snow, you are already familiar with the seasonal chore of snowblowing1. It is currently seven degrees Fahrenheit outside, and the weather forecast calls for five to six inches of snow today, which means that soon enough I’ll bundle up and head out to take part.

Snowblowing gives me a lot of time to think, and over the years I’ve developed a system for keeping the driveway in tip-top condition. There are a few rules, or, more likely “best practices”, to keep in mind when considering next steps while looking at a fresh blanket of snow. You must keep in mind the current and forecasted weather, your schedule and the schedule of anyone who lives with you, the state of your machinery and current preparation level, and finally the pattern you’ll walk when clearing the driveway and sidewalk. Importantly, grab your hot beverage of choice and enjoy the magical beauty of snowfall.

You’ll want to wait for the current storm to be over, don’t start snowblowing when it’s snowing unless you have no choice. Ideally there will be a nice break after the storm. The sky clears and the sun comes out. This is when the city will send out their plows to clear the roads, and it’s best if you can wait till after the plows go by to start your driveway. It’s maddening when the plow goes by your home and shoves a giant pile of slushy mess at the bottom of your freshly snowblown driveway. If you can wait till they are done, you can get it all done at once.

However, don’t wait long! Snow is normally easiest to blow when it is fresh powder. Depending on how much snow fell, and the arrangement of your driveway, you are in danger of the snow melting enough to turn to slush, which is much harder to move.

The number one most important thing to keep in mind is to clear the driveway before anyone drives on it. This is why you need to consider household scheduling. If anyone needs to leave or arrive before you get a chance to snowblow, the weight of the vehicle will pack the snow down tight. Once packed down like this, the snowblower will ride right over the tracks, barely scraping the top instead of pushing the snow out. Then, even if you’ve done a good job with the rest of the driveway, you’ll have two hard-packed treads of snow going across it that will (1) look bad, and (2) turn into a slipping hazard. When the sun comes out, and if your driveway gets a full day’s worth of direct sunlight, even if it’s below freezing, the sun will melt the snow just enough for it to re-freeze and turn to ice. Once this happens it is difficult to get off the driveway, and it’s likely those treads will be there for the remainder of the season.

When the time is right to head outside, you’ll learn quickly whether you are properly prepared. Snowblowing is cold work; start with the right clothes. I’ve found that wearing a base layer of cold-weather gear helps tremendously, as does choosing the right pants. I try to avoid jeans unless they are flannel-lined, and normally go with a pair of hiking pants from Eddie Bauer. These pants provide a layer of protection against melting snow soaking through, dry quickly, and when paired with the base-layer gear are warm enough. I wear a base layer shirt, a t-shirt, and a flannel, and then an Eddie Bauer puffy jacket. I top it off with a North Face winter hat and a pair of insulated leather work gloves. Of all my cold weather gear, I’m currently most unhappy with the gloves; my fingers tend to start freezing after a half-hour or so outside, so I’ll need to replace them with something better soon. Finally, wear thick wool socks if you have them, and good boots. Sneakers will work in a pinch, but I’d double up on the socks, otherwise your toes will get very cold very quickly.

Next, hopefully you’ve read the weather forecast, started up your snowblower at least once this season, checked the oil, and made sure you’ve got a full tank along with a spare gallon or two of gas. If you start to snowblow the driveway and run out of gas, or, worse, have a snowblower that won’t start, all the other preparation and scheduling you’ve done won’t matter. Like a Scout, be prepared.

Speaking of snowblowers, my advice for equipment is the same as my advice for computers. Buy the best you can afford. This is not an area to skimp. True, it might be hard to look at it during the other ten months out of the year when it’s sitting in your shed or garage taking up space, but you’ll be thankful you have it those few times per year you need it. When looking at purchasing equipment, remember the old adage, “Buy nice, or buy twice.” You really want that snowblower to start up the first time you pull the cord.

In the same vein, purchase the best snow shovel you can afford. Even though you’ve got a nice snowblower, you’ll need a shovel for detail work, porches, and light snowfalls that don’t warrant getting out the powered equipment. Avoid the cheap plastic shovels, and avoid shovels with a weird bend in the handle. What you want is a wide metal shovel with a good blade on the edge and a sturdy, straight handle. That will give you the most control over the shovel and let you get the most work done with it. I’ve always found the bent handle shovels to be awkward to use.

Once you’ve found the right weather at the right time, you are properly dressed, and your equipment is prepared and ready, it’s time to start blowing snow. The way you go about this is part science and part art. You have to read the wind, the weight of the snow, and the power of your snowblower. You need to keep in mind how you want the driveway to look afterwards. Ideally, you move enough snow off the driveway that it’s bare concrete underneath, or close enough to bare that the sun will melt what’s left and leave it clean. You want clean straight lines delineating the edges of your driveway where your lawn starts. If you can’t get to clear concrete, you’ll want your driveway to have just the barest layer of snow left on it, and have that snow reflect the plow tracks of your snowblower. Again, long straight lines are best. Avoid mixing vertical and horizontal patterns; you’ll want the entire driveway to look like it was all done at once.

The pattern you use to clear the driveway will be somewhat dependent on the weather, but I’ve found that it normally works best to start in the center of the driveway, blowing snow to the left and right, and work out towards the edges.

You could start at one edge and work your way all the way over to the other edge. If the wind is strong in one direction crossways this is probably the best bet.

You’ll need room to turn around, so I’ve found that one or two paths right at the top of the driveway, parallel to the road, give me plenty of room. However, avoid trying to snowblow the entire driveway this way; you’ll wind up with a face full of snow, blow snow all over your freshly cleared driveway, and, worse, blow snow into the road. Don’t blow snow into the road, that’s poor form.

Finally, once the driveway is cleared you may be tempted to put salt on it to prevent icing. Don’t. Salt will damage your concrete, and it stops working once the temperature drops below about ten degrees anyway. If icing is a safety issue, it’s better to put down a concrete-safe deicing agent. I’ve seen sand recommended, but I’ve never seen anyone using it. Overall, it’s best to put the work in with the snowblower and shovel to clear all the snow off the driveway before it turns to ice, and let the sun take care of the rest. That way you don’t have to worry about slipping.

Some see snowblowing as a chore; I see it as a rare opportunity to keep perfecting a craft. In a world where most work happens in front of a screen, it’s good to be able to physically accomplish something you can be proud of. And a well-cared-for driveway, clean and cleared of snow and ice after a storm, is definitely something to be proud of.

  1. Apparently “snowblowing” is not a proper English word. I don’t care. It should be. It’s what I do when I clear my driveway of snow using the snowblower.