jb… a weblog by Jonathan Buys

Writing about Jekyll

August 25, 2009

I’m writing an article for TAB about my new blogging engine, Jekyll. I’ve taken most of the reliance on the command line out of dealing with Jekyll on a day-to-day basis, and instead have a few Automator workflows in the scripts menu in the Mac menubar. It’s a great setup, and I’m really enjoying it. I’m sure there will be quite a bit of enhancement yet to come, but my initial workflow looks like this:

  1. Click “New Blog Post”
  2. Write the article
  3. Click “Run Jekyll”
  4. Make sure everything worked using the local WEBrick web server.
  5. Click “Kill Jekyll”
  6. Click “Sync Site”

Here’s what I’ve got so far in the Automator workflows:

New Blog Post

First, I run the “Ask for Text” action to get the name of the post. Then, I run this script:

NAME=`echo "$1" | sed 's/ /-/g'`                  # spaces become dashes for the filename
POSTNAME=`date +%Y-%m-%d`-$NAME                   # Jekyll expects YYYY-MM-DD-title
POST_FQN="$HOME/Sites/_posts/$POSTNAME.markdown"
touch "$POST_FQN"
echo "---"          >> "$POST_FQN"                # YAML front matter
echo "layout: post" >> "$POST_FQN"
echo "title: $1"    >> "$POST_FQN"
echo "---"          >> "$POST_FQN"
/usr/bin/mate "$POST_FQN"                         # open the new post in TextMate
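So a post titled “Hello World”, created today, ends up in _posts as a file named something like 2009-08-25-Hello-World.markdown, containing just the front matter and ready for writing:

```yaml
---
layout: post
title: Hello World
---
```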

Run Jekyll

First, I run this script:

cd "$HOME/Sites"
/usr/bin/jekyll > /dev/null                      # build the site into _site
/usr/bin/jekyll --server > /dev/null 2>&1 &      # start the preview server in the background
/usr/local/bin/growlnotify --appIcon Automator "Jekyll is Done" -m 'And there was much rejoicing.'
echo "http://localhost:4000"                     # handed to the “New Safari Document” action

Followed by the “New Safari Document” Automator action. This runs Jekyll, which converts the post I just wrote from Markdown to HTML and updates the site navigation, then starts the local web server and opens the site in Safari for a preview.

Kill Jekyll

Since I started the local server in the previous workflow, I need to kill it in this one. This action does just that.

PID=`ps -eaf | grep "jekyll --server" | grep -v grep | awk '{ print $2 }'`   # find the server process
kill $PID
/usr/local/bin/growlnotify --appIcon Automator "Jekyll is Dead" -m 'Long Live Jekyll.'

This is entered in as a shell script action, and is the only action in this workflow.

Sync Site

Once I’m certain everything looks good, I run the final Automator action to upload the site:

cd /Users/USERNAME/Sites/_site/
rsync -avz -e ssh . USERNAME@jonathanbuys.com:/home/USERNAME/jonathanbuys.com/ > /dev/null
/usr/local/bin/growlnotify --appIcon Automator "Site Sync Complete" -m 'Check it out.'

This is also a single Automator action workflow. You’ll notice that I use Growl to notify me when each script is finished. That’s not strictly necessary, but it’s fun anyway.

Like I said, there’s a lot of improvement yet to go, but I think it’s a solid start. I’m at a point now where I’m tempted to start writing a WordPress import feature, which seems to be the only major piece missing from the Jekyll puzzle. I’m not sure what this would take just yet, but I’ve got a few ideas. I haven’t tried uploading any images or media yet, but since everything is static, I assume it would just be a matter of placing the image in a /images folder and embedding it in the HTML. So far, I’m having a lot of fun, and that’s what blogging is really all about.


The Unix Love Affair

August 10, 2009

There have been times when I’ve walked away from the command line, times when I’ve thought about doing something else for a living. There have even been brief periods when I’ve flirted with Windows servers. However, I’ve always come back to Unix, in one form or another: starting with Solaris, then OpenBSD, then every flavor of Linux under the sun, to AIX, and back to Linux. Unix is something that I understand, something that makes sense.

Back in ’96, when I started in the tech field, I discovered that I have a knack for understanding technology. Back then it was HF receivers and transmitters, circuit flow, and 9600 baud circuits. Now I’m bonding dual gigabit NICs together for additional bandwidth and failover in Red Hat. The process, the flow of logic, and the basics of troubleshooting still remain the same.

To troubleshoot a system effectively, you need to do more than just follow a list of pre-defined steps. You need to understand the system; you need to know the deep internals of not only how it works, but why. In the past 13 years of working in technology, I’ve found that learning the why is vastly more valuable than just knowing the how.

Which brings me back to why I love working with Unix systems again. I understand why they act the way that they do; I understand the nature of the behavior. I find the layout of the filesystem to be elegant, and a minimally configured system to be best. I know that there are a lot of problems with the FHS, and I know that it’s been mangled more than once, but still. In Unix, everything is configured with a text file somewhere, normally in /etc, but from time to time somewhere else. Everything is a file, which is why tools like lsof work so well.

Yes, Unix can be frustrating, and yes, there are things that other operating systems do better. It is far from perfect, and has many faults. But, in the end, there is so much more to love about Unix than there is to hate.


Slowly Evolving an IT System

July 18, 2009

We are going through a major migration at work, upgrading our four-and-a-half-year-old IBM blades to brand spanking new HP BL460 G6s. We run a web infrastructure, and the current plan is to put our F5s, application servers, and databases in place, test them all out, and then take a downtime to swing IPs over and bring up the new system. It’s a great plan, it’s going to work perfectly, and it gives us the least amount of downtime. Also… I hate it.

The reason I hate it has more to do with technical philosophy than with actual hard facts. I prefer a slow and steady evolution, a recognition that we are not putting in a static system, but a living organism whose parts are made up of bits and silicon. What I’d like to do is put in the database servers first, then swing over the application servers, and then the F5, which is going to replace our external web servers and load balancers. One part at a time, and if we really did it right, we could do each part with very little downtime at all. However, I can see the point in putting in everything at once: you test the entire system from top to bottom, make sure it works, and when everyone is absolutely certain that all the parts work together, flip the switch and go live. But… then what?

What about six months down the road when we are ready to add capacity to the system, what about adding another database server, what about adding additional application servers to spread out the load, what about patches?

Operating systems are not something that you put into place and never touch again. IT systems made up of multiple servers should not be viewed as fragile, breakable things that should not be touched. We can’t set this system up and expect it to be the same three years from now when the lease on the hardware is up. God willing, it’s going to grow, flourish, change.

Our problems are less about technology, and more about our corporate culture.


Teach A Man To Fish

July 13, 2009

As a general rule, I really don’t like consultants. Not that I have anything against any of them personally; it’s just that, as a whole, most consultants I’ve worked with are no better than our own engineers and administrators. The exception that proves this rule is our recent VMware consultant, who was both knowledgeable and willing to teach. Bringing in an outside technical consultant to design, install, or configure a software system is admitting that not only do we as a company not know enough about the software, we don’t plan on learning enough about it either. Bringing in a consultant is investing in that company’s knowledge, and not investing in our own.

It costs quite a bit of money to bring in a consultant; they do not come cheap. For one, if there is no one local, you have to pay for travel and lodging. Most consultants charge by the hour, so you have billable time bringing them up to speed on what your new system is and what you are trying to accomplish with it before they can start helping you. If you are bringing in a consultant for an IBM product, you need to be prepared to sit on the phone with him and put in several PMRs.

I would rather spend the equivalent amount of money on sending employees to training than on a consultant. Once a consultant leaves after performing their task, the regular employees who maintain the system are on their own, and without the appropriate training they are often lost. When the consultant leaves, he takes all of his expertise with him. Expertise that was used to set up a system that he has no personal stake in, other than his reputation as a consultant, which may or may not matter depending on the relationship between the two companies. When you send employees to training on a new software product or technology, you are building that same expertise internally. Initially, the internal expertise will not be on the same level as that of the consultants, but over time as the employees administer the system that they built, their expertise grows deeper and stronger.

Teams that are experts on the systems they are in charge of can build on that system. They can recognize shortcomings and bottlenecks, and troubleshoot problems faster than on systems that they simply maintain. They know the internal architecture of the system, not only how it works, but why it works.

In the Navy, I was lucky enough to work for a Senior Chief who believed that we needed to be experts on the systems we managed, because once we were out to sea, no one was going to come out to help us. He sent us to training, or brought training in, two or three times a year, for one- to two-week sessions on everything from Unix to Exchange. This same mindset could apply equally well to companies who operate 24x7 infrastructures. When the system crashes at 2 AM, there’s not going to be anyone there to help you; your team will be on their own, and if you have not invested in the team, it’s going to be a very long night.

If your company is not going to invest in you, you need to invest in yourself.


Regarding OS Zealotry

July 9, 2009

Today I found myself in the unfortunate situation of defending Linux to a man I think I can honestly describe as a Windows zealot. I hate doing this, as it leads to religious wars that are ultimately of no use, but it’s really my own fault for letting myself be sucked into it. It started when we were attempting to increase the size of a disk image in VMware while the Red Hat guest was running. It didn’t work, and we couldn’t find any tools to rescan the SCSI bus, or anything else to get Linux to recognize that the disk was bigger. I was getting frustrated, and the zealot began to laugh, saying how easy this task was in Windows. Obviously, I felt slighted, since I’m one of the Unix admins at $work, and decided I needed to defend the operating system and set of skills that pays the bills here at home. And so, we started trading snide remarks back and forth about Linux and Windows.
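For the record, the knob we were hunting for does exist on 2.6-era kernels. This is a hedged sketch rather than anything we verified that day — the sysfs paths and host numbers vary by system, and writing to them requires root:

```shell
# Ask each SCSI host adapter to scan for new or changed devices
# (sysfs interface on 2.6 kernels; skipped silently when not writable).
for scan in /sys/class/scsi_host/host*/scan; do
    if [ -w "$scan" ]; then
        echo "- - -" > "$scan"
    fi
done
# For a disk that merely grew, the per-device knob is similar:
#   echo 1 > /sys/class/scsi_device/0:0:0:0/device/rescan
```

Of course, knowing that now doesn’t help with how the afternoon actually went.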

At one point, I told him that since Windows was so easy, MCSEs were a dime a dozen. This is probably wrong; I don’t have anything to back it up with. Really, the entire argument was wrong, and I was dragged down to the level of grade-school arguing about whose OS was “better”. The entire thing was pointless, and wound up costing time and effort that should have been spent on the task at hand. After giving it some thought while doing yard work this evening, I’ve decided to get out of OS debates altogether.

I’ve worked as a Windows admin in the past; I even took a few tests and earned the MCSA certification back in 2002. I don’t have anything against Windows, but I don’t personally feel that it’s a technically superior server to Linux or even AIX. Windows has some administrative tools that make me drool in envy. I wish I could set up group policy, I wish I could get a Linux host to authenticate centrally as easily as a Windows server joins a domain, and evidently disk management is extremely easy now. However, the real strengths of Linux are not that it is easy to use or easy to administer. The strengths of Linux are its stability and security.

Case in point: I’ve personally seen web hosting environments built on a default install of SLES 8 that were not patched for four and a half years, and never had a problem. Best practice? Of course not, but it worked. I’m not sure I could say the same for Windows in that same situation. Another example: another place I worked had a Linux web server whose root partition was 100% full. This particular server was not built with LVM, so we couldn’t just extend the disk, and we also couldn’t just delete data, since we didn’t know what was needed and what was not. This server kept up and running for at least a year, and may still be running now, happily serving up web pages with a full root partition. What happens if you fill up the C:\ drive of a Windows server? I’m thinking that it crashes, but I’m not sure.

So, is a Linux server “better” than a Windows server? Is Windows “better”? In this, as in most things, the answer is: it depends. Both systems come at things from a different direction, and each show their strengths and weaknesses differently. In my experience, I’ve gained a respect for both. I prefer Linux, because honestly, I think it’s more fun.

I’ve really only worked with one other Windows zealot, and we used to argue over the use of Linux on the desktop. Linux on the desktop and Linux on the server are two totally different animals. Sure, they use the same kernel and the same basic userland apps in the shell, but other than that, they have different purposes. Arguments against using Linux on the desktop are more often than not aimed at GNOME or KDE, and not at the actual Linux OS underneath. I come to the same conclusion there as well, though: certainly some things do not work as well in Linux as they do in Windows. Some things work better, but all in all, I just think Linux is more interesting.

I think what our argument came down to today was that he doesn’t understand why anyone would use Linux, since Windows, in his opinion, is so much better at everything than Linux. I think a little bit of professional courtesy would have gone a long way here, but it’s just as much my fault for continuing the argument as it was his. My position on the comparison is this: yes, some things are much, much harder in Linux than in Windows… but it’s so much more fun managing Linux. A stripped-down, well-oiled Linux server can be a screamer for performance and reliability. Is it easy getting there? No, but it’s worth the extra effort.


Personal Rules for Cocoa Happiness

June 29, 2009

  1. Understand every line of code written in my apps.
  2. Use frameworks sparingly (see rule 1).
  3. Do not copy and paste code (see rule 1).
  4. Add lots of comments so I remember what I was doing at any given time.
  5. Manage memory as I go, don’t wait until I’m nearly finished and then go and “clean it up”.
  6. Draw out the application on paper before writing a single line of code.
  7. Search Apple’s documentation, CocoaDev mailing list archives, Google, and anything else I can think of before asking for help.
  8. When programming, quit Mail, Twitter, NetNewsWire, and IM. Turn on some non-distracting music in iTunes.
  9. Get a cup of coffee.
  10. Remember, “This is hard, you are not stupid.”

Ping from Cocoa

June 15, 2009

One of the features of Servers.app is that it pings the host and shows a little indicator for its status: green is good, red is dead. Getting that little feature in here was not quite as easy as I thought it would be. After searching the Apple documentation for what seemed like years, I stumbled across SimplePing. It was perfect, exactly what I was looking for. I dropped the two source code files into my project, read through the documentation commented out in the header file, and added the code to call SimplePing. It worked, and everything seemed fine. Except that it kept working, even when I knew for certain that the host did not exist.

So, I started digging through the source code for SimplePing.c, and found that instead of returning a standard error code, it called perror, which, according to the documentation, doesn’t return a value. This is fine if you want to log the ping results, but I wanted to ping the server and make a decision based on the result in code. So, I changed a couple of lines of code from this:

if (error == 0) // was able to wait for reply successfully (whether we got a reply or not)
{
    if (gotResponse == 1)
    {
        numberPacketsReceived = numberPacketsReceived + 1;
    }
}

to this:

if (error == 0) // was able to wait for reply successfully (whether we got a reply or not)
{
    if (gotResponse == 1)
    {
        numberPacketsReceived = numberPacketsReceived + 1;
    }
    else
    {
        error = 1;
    }
}

Simply adding an else clause in there to set an error code gave my application something to work with. Now, from my application controller I can call SimplePing like this:

int systemUp;
systemUp = SimplePing(cString, 2, 2, 1);

After that, I can make decisions based on the value of the systemUp int. Now, it’s entirely possible that I’m doing this wrong; all I can say is that it works for me now, and it didn’t before.


Macs, Netbooks, and Education

May 19, 2009

What does it mean when an entire community springs up around hacking together a product that is not otherwise available? Apple has been adamant that it is not interested in the netbook market, but according to the many users who are breaking Apple’s EULA and installing OS X on Dell Minis, the market is there, and waiting.

Rumors have been circulating for some time now that Apple is working on a netbook. I wish I had some special insight or access to Apple’s inner workings so I could confirm or deny those rumors, but all I can say is I hope so. To get everyone on the same page here, let me explain exactly what I mean when I say an Apple netbook. I’m talking about a small device with a ten-inch (or smaller) screen, a small physical keyboard, and reduced hardware specs, running a legal, full copy of OS X. I’m not talking about an iPhone, or some kind of iPhone/MacBook hybrid. While it’s not clear whether Apple would consider such a device, what is clear is that people would buy a $500 MacBook Nano (or NetMac?) in droves.

Part of the problem here is that this was thought up by someone other than Apple. Apple likes being the innovator in whatever it does, and it seems to me that they don’t like following where someone else has already gone. Fair enough, but by not offering what people are asking for, Apple is, in a roundabout way, promoting the download of illegitimate hacked versions of OS X. They know that people want it, but they don’t know how to give it to them.

One of the arguments I’ve read against a Mac netbook is that offering such a device would cannibalize sales of Apple’s other notebook offerings. However, what that argument misses is that netbook sales are in a different sector than regular notebook sales. All of the traditional arguments against netbooks still apply, and are still just as valid: cramped keyboard, small screen, underpowered hardware. The people who are buying netbooks are not trying to decide between a notebook and a netbook; they are not going to buy a notebook at all. The decision would not be whether to buy a Mac netbook or a MacBook; it would be whether to buy a Mac netbook or some other brand of netbook.

Traditionally, Apple has been strong in the education market, and netbooks are perfectly suited for use by students. The smaller keyboards fit their hands, and the small size and light weight of netbooks make them easy to carry between classes, sit on a desk with room left over for writing, and tote back and forth from home. Without an Apple offering in this market, I would not be surprised at all to hear of increasing numbers of schools purchasing nine-inch Acer netbooks.

The bottom line in the Mac netbook debate is that the market is ready for the product to appear, and willing to purchase it when it does arrive. What Apple needs to figure out is how to make a $500 computer that’s not junk. Dell, Sony, and HP seem to have figured it out. When, and if, Apple does release a netbook, one thing is clear: they will make sure that it redefines the entire netbook market. Then I’d imagine the little “NetMacs” selling by the truckload.


Jim Sanford

May 13, 2009

My Mama said if I’d be good, she’d send me to the store.
She said she’d bake some gingerbread if I would sweep the floor.
She said if I would make the beds and watch the telephone,
She would send me out to buy a chocolate ice cream cone.

And so I did, the things she said
And she made me some gingerbread.
Then I went out, just me alone
And I bought me a chocolate ice cream cone.

Now on the way home, I stubbed my toe upon a big ol’ stone.
Need I tell you that I dropped my chocolate ice cream cone.
A little doggy came along and took a great big lick.
So I hit that little doggy with a great big stick.
And he bit me, where I sit down.
And he chased me all over town.
And now I’m lost…can’t find my home.
All because of a chocolate ice cream cone!

Good night Papa, we love you.

Jan. 26, 1929 - May 9, 2009

DiggBar This

April 10, 2009

I’ve put a lot of work into this site. I’ve put thought into how I want the URLs to look, how the layout should feel, and since the site has my name on it, I wanted it to be good. I created a custom theme for the site which will purposefully only work here, and I’ve even done a bit of personal “branding” I suppose with the jb{ logo.

So, when I saw that Digg had decided to take a page out of 1999’s web playbook and start framing sites inside of their DiggBar, I was more than a little annoyed. There may be only six people who read this site on a regular basis (thanks, guys!), but this site is entirely mine. The DiggBar removes the URL from the site you are reading and adds its own custom shortened URL, making it hard to bookmark, or really even remember where you are.

John Gruber, of Daring Fireball fame, whose work I admire, evidently shares my feelings about the DiggBar, and posted some PHP code to block it. I have taken his example code and created a tiny WordPress plugin that places his code in the correct spot in the site. The goal is to make it super easy for the thousands of WordPress installs to remove this particular menace.

Please let me know in the comments if you have any problems.

UPDATE 04/14/2011 - Since I no longer use PHP for this blog, I’ve dropped support for the DiggBar block. However, there are several great alternatives out there. Also, is the DiggBar even a thing anymore? I’m not sure.