Morning Links

Tackle complex reasoning tasks with Mistral Large, now available on Amazon Bedrock

Calculate the difference in payment dates between the two customers whose payment amounts are closest to each other in the given dataset:

'{"transaction_id": {"0": "T1001", "1": "T1002", "2": "T1003", "3": "T1004", "4": "T1005"}, "customer_id": {"0": "C001", "1": "C002", "2": "C003", "3": "C002", "4": "C001"}, "payment_amount": {"0": 125.5, "1": 89.99, "2": 120.0, "3": 54.3, "4": 210.2}, "payment_date": {"0": "2021-10-05", "1": "2021-10-06", "2": "2021-10-07", "3": "2021-10-05", "4": "2021-10-08"}, "payment_status": {"0": "Paid", "1": "Unpaid", "2": "Paid", "3": "Paid", "4": "Pending"}}'

Output

To find the difference in payment dates between the two customers whose payment amounts are closest to each other, we first need to identify the two closest payment amounts and then find their respective payment dates.

The payment amounts are: 125.5, 89.99, 120.0, 54.3, 210.2

The two closest payment amounts are 125.5 and 120.0. The payment date for the payment amount 125.5 is '2021-10-05' and for the payment amount 120.0 is '2021-10-07'.

The difference in payment dates can be calculated as follows:

Date difference = |'2021-10-07' - '2021-10-05'| = 2 days

So, the difference in payment dates between the two customers whose payment amounts are closest to each other is 2 days.
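The model's arithmetic holds up. Here's a quick stdlib-Python sketch of the same calculation against the dataset above (the variable names are mine, and I've dropped the columns that don't matter):

```python
import json
from datetime import date
from itertools import combinations

data = json.loads("""{
  "payment_amount": {"0": 125.5, "1": 89.99, "2": 120.0, "3": 54.3, "4": 210.2},
  "payment_date": {"0": "2021-10-05", "1": "2021-10-06", "2": "2021-10-07",
                   "3": "2021-10-05", "4": "2021-10-08"}
}""")

# Pair each payment amount with its date, then find the two closest amounts.
rows = [(data["payment_amount"][k], data["payment_date"][k])
        for k in data["payment_amount"]]
(a, date_a), (b, date_b) = min(combinations(rows, 2),
                               key=lambda p: abs(p[0][0] - p[1][0]))

# abs() of a timedelta gives the magnitude regardless of order.
diff_days = abs(date.fromisoformat(date_a) - date.fromisoformat(date_b)).days
print(a, b, diff_days)  # closest pair of amounts and the gap in days
```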

AWS does well when taking complex systems and packaging them up for consumers. I don’t think we’ve seen the full impact of AI yet. Not even close.

Casey Liss - Every Failure is an Opportunity

So, I called Sonos support this morning, and braced for a fight — I’m about six months out of the warranty coverage.

Thirty minutes later, without any arguing or complaining, I have an RMA, and I'll get a new Roam in a couple of weeks.

Reminds me of a time when I took my newly broken iPod Mini into an Apple Store. I showed it to the guy at the counter, and he said he couldn't say for sure whether it was accidental damage or a manufacturing defect, so he went in the back and brought me a brand new one. I've been a fan since. Sounds like Sonos learned the right lesson.

New Spatial Personas

A pair of links for this one, one from each point of view:

I was able to invite my friend Stephen Hackett’s Persona over to my house for a play date and we were able to chat face to face in a way that just seemed more natural than talking to a persona in a box. It felt more like it was him.

After our excellent ideation session, we played a round of Battleship in the excellent Game Room on Apple Arcade. After I sunk all of his ships, we watched a few minutes of For All Mankind in a couple of immersive environments.

I had always wondered about SharePlay, prior to the Vision Pro. Why would I want to hold my phone up, looking at someone on FaceTime, while we both watched a movie? It never made sense. Now, in the light of immersive virtual reality, it makes perfect sense.

The one about the web developer job market

We have the worst job environment for tech in over two decades and that’s with the “AI” bubble in full force. If that bubble pops hard before the job market recovers, the repercussions to the tech industry will likely eclipse the dot-com crash.

Well, this gave me a lot of food for thought.

xz Utils Backdoor

I simply don’t believe this was the only attempt to slip a backdoor into a critical piece of Internet software, either closed source or open source. Given how lucky we were to detect this one, I believe this kind of operation has been successful in the past. We simply have to stop building our critical national infrastructure on top of random software libraries managed by lone unpaid distracted—or worse—individuals.

Not great news the past couple of days from the software industry.

Loading and Indexing SQLite

What a difference a couple of lines of code can make.

I recognize that databases have always been a weak point for me, so I’ve been trying to correct that lately. I have a lot of experience with management of the database engines, failover, filesystems, and networking, but too little working with the internals of the databases themselves. Early this morning I decided I didn’t know enough about how database indexes worked. So I did some reading, got to the point where I had a good mental model for them, and decided I’d like to do some testing myself. I figured 40 million records was a nice round number, so I used fakedata to generate 40 million SQL inserts that looked something like this:

INSERT INTO contacts (name,email,country) VALUES ("Milo Morris","pmeissner@test.tienda","Italy");
INSERT INTO contacts (name,email,country) VALUES ("Hosea Burgess","kolage@example.walmart","Dominica");
INSERT INTO contacts (name,email,country) VALUES ("Adaline Frank","shaneIxD@example.talk","Slovenia");

I saved this as fakedata.sql and piped it into sqlite3 and figured I’d just let it run in the background. After about six hours I realized this was taking a ridiculously long time, and I estimated I’d only loaded about a quarter of the data. I believe that’s because SQLite was treating each INSERT as a separate transaction.

A transaction in SQLite is a unit of work. SQLite ensures that each write to the database is Atomic, Consistent, Isolated, and Durable, which means that for each of the 40 million lines I was piping into sqlite3, the engine ensured the line was fully committed to the database before moving on to the next one. That's a lot of work for a very, very small amount of data. So, I did some more reading and found a recommendation to explicitly wrap the entire load in a single transaction, so my file now looked like:

BEGIN TRANSACTION;
INSERT INTO contacts (name,email,country) VALUES ("Milo Morris","pmeissner@test.tienda","Italy");
INSERT INTO contacts (name,email,country) VALUES ("Hosea Burgess","kolage@example.walmart","Dominica");
INSERT INTO contacts (name,email,country) VALUES ("Adaline Frank","shaneIxD@example.talk","Slovenia");
COMMIT;

I set a timer and ran the import again:

➜  var time cat fakedata.sql | sqlite3 test.db
cat fakedata.sql  0.07s user 0.90s system 1% cpu 1:13.66 total
sqlite3 test.db  70.81s user 2.19s system 98% cpu 1:13.79 total

So, that went from 6+ hours to about 71 seconds. And I imagine if I did some more optimization (possibly using the Write Ahead Log?) I might be able to get that import faster still. But a little over a minute is good enough for some local curiosity testing.
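The same batching idea carries over to Python's sqlite3 module, where wrapping executemany in one transaction gets you the single-big-commit behavior. This is just a sketch with the three sample rows repeated to fake a larger load, not the real 40-million-row import:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, email TEXT, country TEXT)")

rows = [
    ("Milo Morris", "pmeissner@test.tienda", "Italy"),
    ("Hosea Burgess", "kolage@example.walmart", "Dominica"),
    ("Adaline Frank", "shaneIxD@example.talk", "Slovenia"),
] * 10000  # repeat the sample rows to simulate a bigger batch

# One explicit transaction around the whole batch, mirroring BEGIN/COMMIT:
# the "with" block commits once at the end instead of once per INSERT.
with conn:
    conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)  # 30000
```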

Indexes

So… back to indexes.

Indexing is a way of sorting a number of records on one or more fields. Creating an index on a field in a table creates another data structure that holds the field value and a pointer to the record it relates to. The index is then sorted, which allows binary searches to be performed against it.

One good analogy is the index of a physical book. Imagine that a book has ten chapters and each chapter has 100 pages. Now imagine you’d like to find all instances of the word “continuum” in the book. If the book doesn’t have an index, you’d have to read through every page in every chapter to find the word.

However, if the book is already indexed, you can find the word in the alphabetical list, which will then have a pointer to the page numbers where the word can be found.

The downside to the index is that it takes additional space. In the book analogy, while the book itself is 1,000 pages, we'd need another ten or so for the index, bringing the total to 1,010 pages. It's the same with a database: the additional index structure requires more space to hold both the original data field being indexed and a small (4-byte, for example) pointer to the record.
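To make the analogy concrete, an index can be modeled as a separate sorted list of (value, pointer) pairs searched with binary search. This toy Python sketch (the names and table layout are invented, and real databases use B-trees rather than flat lists) shows the idea:

```python
from bisect import bisect_left

# The "table": rows stored in insertion order.
table = [
    ("Hank Perry", "hank@example.test"),
    ("Adaline Frank", "adaline@example.test"),
    ("Milo Morris", "milo@example.test"),
]

# The "index": (field value, pointer to the row) pairs, kept sorted by value.
index = sorted((name, rowid) for rowid, (name, _) in enumerate(table))

def lookup(name):
    # Binary search the index, then follow the pointer back to the table.
    i = bisect_left(index, (name,))
    if i < len(index) and index[i][0] == name:
        return table[index[i][1]]
    return None

print(lookup("Hank Perry"))
```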

Oh, and the results of creating the index are below.

SELECT * from contacts WHERE name is 'Hank Perry';
Run Time: real 2.124 user 1.771679 sys 0.322396
CREATE INDEX IF NOT EXISTS name_index on contacts (name);
Run Time: real 22.129 user 16.048308 sys 2.274184
SELECT * from contacts WHERE name is 'Hank Perry';
Run Time: real 0.003 user 0.001287 sys 0.001598

That’s a massive improvement. And now I know a little more than I did.

The Perfect ZSH Config

If you spend all day in the terminal like I do, you come to appreciate its speed and efficiency. I often find myself in Terminal for mundane tasks like navigating to a folder and opening a file; it's just faster to type where I want to go than it is to click in the Finder, scan the folders for the one I want, double-click that one, scan again… small improvements to the speed of my work build up over time. That speed increases dramatically with the right configuration for your shell, in my case, zsh.

zsh is powerful and flexible, which means it can also be intimidating to configure yourself. Doubly so when there are multiple 'frameworks' available that will do the bulk of the configuration for you. I used Oh My Zsh for years, but I recently abandoned it in favor of maintaining my own configuration, keeping only the settings I need.

I’ve split my configuration into five files:

  • apple.zsh-theme
  • zshenv
  • zshrc
  • zsh_alias
  • zsh_functions

I have all five files in a dotfiles git repository, pushed to a private Github repository.

The zshenv file is read first by zsh when starting a new shell. It contains a collection of environmental variables I’ve set, mainly for development. For example:

export PIP_REQUIRE_VIRTUALENV=true
export PIP_DOWNLOAD_CACHE=$HOME/.pip/cache
export VIRTUALENV_DISTRIBUTE=true

The next file is zshrc, which contains the bulk of the configuration. My file is 113 lines, so let's take it a section at a time.

source /Users/jonathanbuys/Unix/etc/dotfiles/apple.zsh-theme
source /Users/jonathanbuys/Unix/etc/dotfiles/zsh_alias
source /Users/jonathanbuys/Unix/etc/dotfiles/zsh_functions

The first thing I do is source the other three files. The first is my prompt, which is cribbed entirely from Oh My Zsh. It’s nothing fancy, but I consider it to be elegant and functional. I don’t like the massive multi-line prompts. I find them to be far too distracting for what they are supposed to do.

My prompt looks like this:

 ~/Unix/etc/dotfiles/ [master*] 

It gives me my current path, what git branch I’ve checked out, and if that branch has been modified since the last commit.

The next two files, as their names suggest, contain aliases and functions. I have three functions and 16 aliases. I won't go into each of them here, as they are fairly mundane and specific to my setup. The three functions print the current path of the open Finder window, use Quicklook to preview a file, and generate a uuid string.

The next few lines establish some basic settings.

autoload -U colors && colors
autoload -U zmv
setopt AUTO_CD
setopt NOCLOBBER
setopt SHARE_HISTORY
setopt HIST_IGNORE_DUPS
setopt HIST_IGNORE_SPACE

The autoload lines set up zsh to use pretty colors and enable the extremely useful zmv command for batch file renaming. The interesting parts of the setopt settings are the ones dealing with command history: these three options allow sharing command history between open windows or tabs. So if I have multiple Terminal windows open, I can browse the history of both from either window. I find myself thinking the environment is broken when this is not present.

Next, I setup some bindings:

  # start typing + [Up-Arrow] - fuzzy find history forward
  bindkey '^[[A' up-line-or-search
  bindkey '^[[B' down-line-or-search

  # Use option as meta
  bindkey "^[f" forward-word
  bindkey "^[b" backward-word

  # Use option+backspace to delete words
  x-bash-backward-kill-word(){
      WORDCHARS='' zle backward-kill-word
  }
  zle -N x-bash-backward-kill-word
  bindkey '^W' x-bash-backward-kill-word

  x-backward-kill-word(){
      WORDCHARS='*?_-[]~\!#$%^(){}<>|`@#$%^*()+:?' zle backward-kill-word
  }
  zle -N x-backward-kill-word
  bindkey '\e^?' x-backward-kill-word

These settings let me use the arrow keys to browse history, use option + arrow keys to move one word at a time through the current command, and use option + delete to delete one word at a time. Incredibly useful; I use it all the time. Importantly, this also lets me do incremental searching through my command history with the arrow keys. So, if I type aws and then arrow up, I can browse all of my previous commands that start with aws. When you have to remember commands that take 15 arguments, this is absolutely invaluable.

The next section has to do with autocompletion.

# Better autocomplete for file names
WORDCHARS=''
unsetopt menu_complete   # do not autoselect the first completion entry
unsetopt flowcontrol
setopt auto_menu         # show completion menu on successive tab press
setopt complete_in_word
setopt always_to_end
zstyle ':completion:*:*:*:*:*' menu select

# case insensitive (all), partial-word and substring completion
if [[ "$CASE_SENSITIVE" = true ]]; then
  zstyle ':completion:*' matcher-list 'r:|=*' 'l:|=* r:|=*'
else
  if [[ "$HYPHEN_INSENSITIVE" = true ]]; then
    zstyle ':completion:*' matcher-list 'm:{[:lower:][:upper:]-_}={[:upper:][:lower:]_-}' 'r:|=*' 'l:|=* r:|=*'
  else
    zstyle ':completion:*' matcher-list 'm:{[:lower:][:upper:]}={[:upper:][:lower:]}' 'r:|=*' 'l:|=* r:|=*'
  fi
fi
unset CASE_SENSITIVE HYPHEN_INSENSITIVE

# Complete . and .. special directories
zstyle ':completion:*' special-dirs true
zstyle ':completion:*' list-colors ''
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#) ([0-9a-z-]#)*=01;34=0=01'
zstyle ':completion:*:*:*:*:processes' command "ps -u $USERNAME -o pid,user,comm -w -w"

# disable named-directories autocompletion
zstyle ':completion:*:cd:*' tag-order local-directories directory-stack path-directories

# Use caching so that commands like apt and dpkg complete are useable
zstyle ':completion:*' use-cache yes
zstyle ':completion:*' cache-path $ZSH_CACHE_DIR
zstyle ':completion:*:*:*:users' ignored-patterns \
        adm amanda apache at avahi avahi-autoipd beaglidx bin cacti canna \
        clamav daemon dbus distcache dnsmasq dovecot fax ftp games gdm \
        gkrellmd gopher hacluster haldaemon halt hsqldb ident junkbust kdm \
        ldap lp mail mailman mailnull man messagebus  mldonkey mysql nagios \
        named netdump news nfsnobody nobody nscd ntp nut nx obsrun openvpn \
        operator pcap polkitd postfix postgres privoxy pulse pvm quagga radvd \
        rpc rpcuser rpm rtkit scard shutdown squid sshd statd svn sync tftp \
        usbmux uucp vcsa wwwrun xfs '_*'

if [[ ${COMPLETION_WAITING_DOTS:-false} != false ]]; then
  expand-or-complete-with-dots() {
    # use $COMPLETION_WAITING_DOTS either as toggle or as the sequence to show
    [[ $COMPLETION_WAITING_DOTS = true ]] && COMPLETION_WAITING_DOTS="%F{red}…%f"
    # turn off line wrapping and print prompt-expanded "dot" sequence
    printf '\e[?7l%s\e[?7h' "${(%)COMPLETION_WAITING_DOTS}"
    zle expand-or-complete
    zle redisplay
  }
  zle -N expand-or-complete-with-dots
  # Set the function as the default tab completion widget
  bindkey -M emacs "^I" expand-or-complete-with-dots
  bindkey -M viins "^I" expand-or-complete-with-dots
  bindkey -M vicmd "^I" expand-or-complete-with-dots
fi

# automatically load bash completion functions
autoload -U +X bashcompinit && bashcompinit

That’s a long section, but in a nutshell this lets me type one character, then hit tab, and be offered a menu of all the possible completions of that character. It is case-insensitive, so b would match both boring.txt and Baseball.txt. I can continue to hit tab to cycle through the options, and hit enter when I’ve found the one I want.

The last section sources a few other files:

[ -f ~/.fzf.zsh ] && source ~/.fzf.zsh
[ -f "/Users/jonathanbuys/.ghcup/env" ] && source "/Users/jonathanbuys/.ghcup/env" # ghcup-env
[ -s "/Users/jonathanbuys/.bun/_bun" ] && source "/Users/jonathanbuys/.bun/_bun"
source /Users/jonathanbuys/Unix/src/zsh-autosuggestions/zsh-autosuggestions.zsh
source /Users/jonathanbuys/Unix/src/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh

If I'm experimenting with Haskell, I'd like to load the ghcup-env variables. If I have bun installed (a way, way faster npm), then use that. The final two sources enable even more enhanced autosuggestions and command line syntax highlighting: typos or commands that don't exist will be red, and good commands where zsh can find the executable will be green. The autosuggestions pull commands from my history and suggest them; I can press the right arrow to accept a suggestion, or keep typing to ignore it.

Taken together, I’ve been able to remove Oh My Zsh, but keep all of the functionality. My shell configuration is constantly evolving as I find ways to make things faster and more efficient. I don’t consider myself a command line zealot, but I do appreciate how this setup gets out of my way and helps me work as fast as I can think.


p.s. A lot of this configuration was taken from other sources shared around the internet, as well as the zsh documentation. I regret that I haven’t kept references to the origins of some of these configs. If I can find the links I’ll post them here.

Future Work and AI

I've been trying to wrap my small monkey brain around what ChatGPT will mean in the long run. I'm going to try to think this through here. In many ways the advances we've seen in AI this past year perpetuate the automation trend that's existed since… well, since humans started creating technology. I've seen arguments on two ends of a spectrum: that AI is often wrong and unreliable and we shouldn't use it for anything important, and that AI is so good it's going to put us all out of jobs. As with most truths, I think the reality is somewhere in between.

It's my opinion that AI will replace a lot of the jobs it can replace. But not all. Referring back to our discussion about the current state of Apple news sites: if the site is a content farm pumping out low-value articles for hit counts and views, I can see AI handling that. If your site is well-thought-out opinions and reviews about things around the Apple ecosystem, I think that will be safe, because it's the person's opinion that gives the site value.

For more enterprise-y jobs, I could see fewer low and mid-level developers. Fewer managers, fewer secretaries, fewer creatives. Not all gone, but certainly less than before. If your job is to create stock photos and put together slide shows, you might want to expand your skill set a bit.

I think… the kind of jobs that will survive are the type that bring real value. The kind of value that can’t be replicated by a computer. Not just the generation of some text or code, but coming up with the why. What needs to be made, and why does it need to be made?

Maybe AI will help free us up to concentrate on solving really hard problems. Poverty, clean water, famine, climate change. Then again, maybe it’ll make things worse. I suppose in the end that’s up to us.

Solar-powered system offers a route to inexpensive desalination - MIT News

Now, a team of researchers at MIT and in China has come up with a solution to the problem of salt accumulation — and in the process developed a desalination system that is both more efficient and less expensive than previous solar desalination methods. The process could also be used to treat contaminated wastewater or to generate steam for sterilizing medical instruments, all without requiring any power source other than sunlight itself.

That’s good news, more of this please.

Link

Use One Big Server - Speculative Branches

We have all gotten so familiar with virtualization and abstractions between our software and the servers that run it. These days, “serverless” computing is all the rage, and even “bare metal” is a class of virtual machine. However, every piece of software runs on a server. Since we now live in a world of virtualization, most of these servers are a lot bigger and a lot cheaper than we actually think.

Link

Why Study Functional Programming?

Learning functional programming is an opportunity to discover a new way to represent programs, to approach problems, and to think about languages.

I’m interested in functional programming. It might be time soon to sit down and start wrapping my head around it.

Link

GPG Signing Git Commits

On my way towards completing another project I needed to set up gpg public key infrastructure. There are many tutorials and explanations of gpg on the web, so I won't try to explain what it is here. My goal is simply to record how I went about setting it up to securely sign my Git commits.

Most everything here I gathered from this tutorial on dev.to, but since I’m sure I’ll never be able to find it again after today, I’m going to document it here.

First, install gpg with Homebrew:

brew install gpg

Next, generate a new Ed25519 key:

gpg --full-generate-key --expert

We pick option (9) for the first prompt, Elliptic Curve Cryptography, and option (1) for the second, Curve 25519. Pick the defaults for the rest of the prompts, giving the key a descriptive name.

Once finished you should be able to see your key by running:

gpg --list-keys --keyid-format short

The tutorial recommends using a second subkey generated from the first key to actually do the signing. So, we edit the master key by running:

gpg --expert --edit-key XXXXXXX

Replacing XXXXXXX with the ID of your newly generated key. Once in the gpg command line, enter addkey, and again select ECC and Curve 25519 for the options. Finally, enter save to save the key and exit the command line.

Now when we run gpg --list-keys --keyid-format short we should be able to see a second key listed with the designation [S] after it. The ID will look similar to this:

sub   ed25519/599D272D 2021-01-02 [S]

We will need the part after ed25519/, in this case 599D272D. Add that to your global Git configuration file by running:

git config --global user.signingkey 599D272D

If you’d like git to sign every commit, you can add this to your config file:

git config --global commit.gpgsign true

Otherwise, pass the -S flag to your git command to sign individual commits. I’d never remember to do that, so I just sign all of them.

Make sure that gpg is unlocked and ready to use by running:

echo "test"  | gpg --clearsign

If that fails, run export GPG_TTY=$(tty) and try again. You should be prompted to unlock GPG with the passphrase set during creation of the key. Add the export command to your ~/.zshrc to fix this issue permanently.

Finally, Github has a simple way to add gpg keys, but first we’ll need to export the public key:

gpg --armor --export 599D272D

Copy the entire output of that command and enter it into the Github console under Settings, “SSH and GPG keys”, and click on “New GPG key”. Once that’s finished, you should start seeing nice green “Verified” icons next to your commits.

Why BBEdit?

I spend a lot of time in the terminal, but I spend an equal amount of time in my text editor. My main requirements for a text editor are that it be equally fast and powerful, but not try to do much more than just be a text editor. I have my terminal a quick command-tab away, so it's very easy to jump back and forth; I don't need a terminal built into my text editor. I also need my text editor to be completely accurate: what I see has to be what is written to the file, no "hidden syntax". That's why I prefer BBEdit. BBEdit has powerful text editing features, but it's not overwhelming. It's not trying to be your entire operating system.

In fact, BBEdit fits perfectly in the macOS environment. I used to be a dedicated vim user, but over time the context switching between the different macOS applications and vim became distracting. One of the great things about macOS is that once you learn the basic keyboard combos they apply almost everywhere, unless the application you are using is not a good Mac citizen. Vim has its own set of keyboard combos, endlessly configurable and expandable. I started forgetting my own shortcuts. BBEdit uses the macOS standard keyboard combos, many inherited from emacs, so if you learn how to navigate text in TextEdit you can apply those same shortcuts to BBEdit.

But, BBEdit is also powerful. You can include custom text snippets for autocompletion, run scripts to manipulate text, and use regular expressions for detailed search and replace operations. With the latest release you can also set up a language server for more powerful syntax checking and completion. BBEdit has been in active development for 30 years, and in that time the developer, Rich Siegel, has continuously improved it and kept it up to date with the ever-changing architectures of macOS. BBEdit feels just as much at home on my M1 MacBook Pro as it did on the Macintosh of the '90s.

BBEdit hits the right balance of power and simplicity for my workflow. It’s fast, reliable, and fits perfectly in the Mac environment. For as long as Rich is developing it, BBEdit will be in my Dock. I don’t know what will happen when he decides to retire, but I’m hoping that decision is many, many years away.

My 2022 Mac Setup

Starting its fifth year on my desk, my 2017 iMac 5k is still going strong, but starting to show its age. There's still nothing that I can't do with it, but after looking at my wife's M1 MacBook Air I can't help but be just a little jealous of how fast that little machine is. I've yet to come across any computer that offers the same value as the iMac, especially when you factor in how good this 27" retina screen is. I'm tempted by both the current 24" M1 iMac and the new 16" M1 Max MacBook Pro, but the rumor mill is talking about bigger, faster iMacs and colorful M2 MacBook Airs, either of which might actually make the most sense for my desk. I want to see the entire lineup before committing. The good thing is that no matter which machine I choose, it's going to be a major speed improvement over where I am now.

On the software side I'm in a bit of a state of flux. Old standards are no longer a given, but because I've been using them for so long I'm reluctant to step away. Apple's Reminders has gotten good enough for most things, but after a brief experiment over the holidays I found that I was letting things slip through the cracks, and headed back to OmniFocus. OmniFocus, for all its complexity, feels like home.

1Password’s upcoming switch to Electron has me concerned, but after some consideration I wonder how much of my concern is practical and how much of it is philosophical. Ideally, I would have preferred if 1P 8 was an evolution of 1P 7, instead of an entirely different application, but at the end of the day I’m still going to use it for work, and I still have a family subscription that I get through my Eero subscription, so I’ll keep it around. And if I’m going to have it anyway… why not keep using it? The flip side of that coin is that when using Safari the built-in password manager is seamless, far simpler than 1Password, even for one-time codes. But, do I really want all my passwords only available in Safari? Sometimes I might want to use Firefox. This is all still up in the air.

For file management, I’ve still got DEVONthink, but it’s another one where I’m not sure I’ll keep it. Again, ideally, iCloud Drive would be encrypted end-to-end so I could safely trust that I could put sensitive work data on it and not be putting my career at risk, but that’s not the world we live in. Without DEVONthink I need to keep all my files on my local machine, which is normally fine, but every now and then I want to get to something while I’m out running around, and DEVONthink to Go is the only secure solution. I’ve heard rumors that encryption might be coming, but until it’s here I think I’m going to stick with DT.

Keyboard Maestro and Hazel are both on the chopping block. I still have both installed, but neither is running. For Keyboard Maestro, I honestly can't keep extra keyboard shortcuts in my head, and I've never found that many uses for the application. For a long time the only thing I'd use it for was text expansion, and even then only for the date stamp, i.e. 2022-01-05_. If there comes a time when iCloud Drive is encrypted, I'll probably turn Hazel back on for automatic file management. But given that everything now gets dumped into DEVONthink, all my files are organized by the app's AI and I have less to set up and maintain. And if I'm honest, I've always had problems with Hazel's pattern matching; sometimes it would inexplicably fail to match a pattern in a file I could easily find by searching with Spotlight. Most of Keyboard Maestro's automation features that I did use I've moved into Shortcuts. I'm looking forward to Hazel-like functionality being built into macOS.

Other than that, I’m happy with Day One, I’ve increased my use of the Apple Notes app quite a bit, I love NetNewsWire (although I’ve decreased the number of sites I’m subscribed to for less, but better, content), MindNode is a fantastic mind-mapping app, and I keep all my mail in Mail. The biggest new addition to my setup is Parallels for running Windows 10. I’ve found I need this for teaching the Microsoft Office course at the local community college. Of course, this is another area where I’m unsure what I’ll do when I finally do update to an M-series Mac, but I’ll cross that bridge when I come to it.