Delicious Bookmarks
Word leaked out yesterday that Yahoo has its popular Delicious bookmarking service on the chopping block. I don’t personally have an account, not anymore, so the closing won’t affect me. Twitter tells a different story; my stream lit up with people upset about the decision. Yahoo’s leak, coupled with their announcement that the company is laying off 4% of its workforce right before Christmas, caused a fairly good-sized migration from Delicious to Pinboard. I do have a Pinboard account, and I think I even have a few bookmarks saved, but it’s been weeks since the last time I visited the site.
There was once a time when I used del.icio.us (as it was once called) extensively, tagging my bookmarks, installing the Firefox extension… I had thousands of sites bookmarked. And I never actually visited them. When Yahoo bought del.icio.us, I closed my account and deleted my bookmarks, and I’ve honestly never missed them. For a while afterwards I used a “To Read” bookmark folder in Firefox or Safari, but again, the size of that folder would grow to be unmanageable, and I’d eventually just delete everything in the folder, losing potentially interesting reading along the way.
Then came Instapaper. My problem with bookmarks was that I did not get around to actually reading the site, and even if I did, my bookmark remained alongside every other bookmark I had. Instapaper provides a simple service to add a site (I have the keyboard shortcut in Safari mapped to ⌘+1), and a beautiful interface on the iPad to actually do the reading. The best part is, when I’m done reading, I tap the trash, select “Archive”, and the article is gone. I don’t have to worry about link rot or managing yet another boatload of data, I just read what interested me and move on.
I believe there is also a psychological reason that Instapaper is so appealing to me. It fits into a “trusted system” type of flow where I know if I send something to Instapaper I will eventually read it. Not true with a bookmarking service. Instapaper’s flow of new content is very similar to how I listen to podcasts; I sync only the unread podcasts to my iPod to listen to on the way to work. In Instapaper, I only ever see the unread articles, so I know that whatever is in my queue is something that is new and that I found interesting enough to save for later.
There are two other types of sites that I come across that I once might have thrown into del.icio.us. There are sites that I want to read every day, which are thrown into NetNewsWire, and there are sites with an amazing article that I absolutely want to maintain access to. In that case, I archive the site in Yojimbo, or I copy the text from Safari’s reader mode and add that into Yojimbo. Either way, Yojimbo becomes my long-term storage mechanism for web content.
I’ve found that Instapaper and Yojimbo fit into my flow of dealing with web content much more naturally than bookmarks. I still have a few bookmarks in my browser, like my bank and login pages for various things, but those are relatively static: things that I haven’t changed in years and access on a regular basis. For everything else, Instapaper is perfect.
PC Apps
Interesting choice of wording in Apple’s most recent press release. The intent of the release is to announce the availability of the Mac App Store on January 6th; however, the interesting parts read like this:
“The App Store revolutionized mobile apps,” said Steve Jobs, Apple’s CEO. “We hope to do the same for PC apps with the Mac App Store by making finding and buying PC apps easy and fun. We can’t wait to get started on January 6.”
Apple spent a long time with the “I’m a Mac” commercials, educating the public about the differences between Macs and PCs, so why the change in wording now? Jobs is a master of nuance and wording, so I can’t imagine that addressing the Mac as a PC is an accident. This may be a continuation of a subtle suggestion that Macs are the past, and iOS is the future, by grouping the Mac along with the PC that Jobs has derided for so many years.
The Proper Place of Technology In Our Lives
It’s now the middle of December, which signals the end of my first semester of grad school. I took two classes, both focused on HCI: cognitive psychology and social implications. The paper I just finished writing for the social implications course was about answering the question of whether all software should be free, and required a lot of research into open source, the Free Software Foundation, and a lot of deep thinking about what I felt was right.
The definitions of freedom offered by the Free Software Foundation act on the assumption that computers are central to a person’s well-being, and that the user of a computer should have full and complete access to the source code of the computer based on a natural right of well-being. However, it is my position that computers, or any other form of technology, only serve to increase the personal freedom of the user in proportion to the increase in overall quality of life of the user of the technology.
Richard Stallman, in his essay entitled “Why Software Should Not Have Owners”, claims that authors of software can claim no natural right to their work, citing the difference between physical products and software, and rejecting the concept of a tradition of copyright. Stallman uses an example of cooking a plate of spaghetti to explain the difference between software and physical products:
When I cook spaghetti, I do object if someone else eats it, because then I cannot eat it. His action hurts me exactly as much as it benefits him; only one of us can eat the spaghetti, so the question is, which one? The smallest distinction between us is enough to tip the ethical balance. But whether you run or change a program I wrote affects you directly and me only indirectly. Whether you give a copy to your friend affects you and your friend much more than it affects me. I shouldn’t have the power to tell you not to do these things. No one should.
However, what Stallman does not address is what gives the second person who receives the software the right to benefit from the author’s work without giving something in return.
Before the industrial revolution, most people learned a skill and worked for themselves in small communities. A single village would have all of the skill sets necessary to sustain itself, and each member of the community would apprentice into a particular skill set to contribute and earn a living. The industrial revolution pushed skilled workers into factories and assembly lines, work that was both distasteful and disdainful to an artisan in the craft. However, corporations were able to reduce cost and increase profits, and the platform has persisted into current work environments.
In the information age, the assembly line mindset has created oceans of cubicles filled with programmers who use their skills in small parts of large software projects, sometimes to great success, but far too often to failure. The Internet and the popularity of lower priced computers have created a market for high quality third party software, the kind that is created by someone with a passion for what they are doing. This passion comes from learning a craft, and using that skill to earn a living, just like the workers from before the industrial revolution. Instead of living physically in small villages, these new age artisans live online and create communities built around social networking.
In many ways, this is a return to a more natural way of life, and a simple form of commerce. One person can create an application and sell it, and another person can buy it from him. The person selling the software benefits from being able to purchase shelter, food, and clothing for his family, and the person who buys the software benefits from the use of the software. It is a very simple transaction, and a model that is not adequately explained in the GNU essays. If all proprietary software is wrong, then an independent developer who sells software as his only job is also wrong. GNU supporters could argue that there is nothing stopping the programmer from selling his software, but that he should give away the source code under a license that permits redistribution along with the software once it is sold. At this point, selling the original program is no longer a viable business model. A programmer cannot continue to sell his software when the user can, and is encouraged to, download his software from somewhere else for free.
While it may be the ethically right thing to do to purchase the software if you intend to use it, ethics alone are often insufficient motivation to encourage people to spend their money. If the choice of supporting the development of the software or not is entirely up to the user of the software, then purchasing the software becomes a choice that the user can make on a whim, with no real implications on the conscience of the user with either decision. GNU and the GPL place this decision squarely on the user, and encourage users not to feel in any way obligated to pay.
The ethics of open source come into question when the requirement of adhering to the free software philosophies results in an independent developer not being able to support a moderate, middle-class lifestyle by developing a relatively popular application. Kant’s first formulation asks what would happen if all developers gave away the source of their code for free. In this imaginary world where all developers did this, the quality of software would go down to the lowest common denominator of acceptability. Each developer’s motivation would be to develop for himself, and, since he would need to find a source of income elsewhere, only in the free time allotted to him. This would result in a wide variety of software availability, with very little integration or testing, mirroring the current state of GNU/Linux based desktop operating systems. Current software companies would move to a business model arranged around providing support to customers of their software. Competition, and therefore innovation, based on pure software features would decrease, since the source code of any feature another group could develop would be easily copied and integrated into competitors’ products.
A second implication of businesses providing support as their primary source of income is that the support becomes the product, not the software itself. Businesses then have a vested interest in creating software that requires support, resulting in intentionally complicated user interfaces.
From a utilitarian point of view, the outcome of proprietary software has clearly been to produce more pleasure for more people than open source has up to this point. Open source software is often more complicated, difficult to learn and maintain, and harder for the average computer user to use. Apple produces proprietary software and hardware, and states their mission to “make the best stuff”. Using their position as a leading software company, and leveraging their control over their computing environment, including iPads, iPods, iPhones, and Mac computers, Apple has been able to successfully negotiate deals with entertainment companies. The deals Apple has made allow the consumer to download music, television shows, and movies off of the Internet and watch them on any Apple branded device, and output the media to their televisions or home stereo systems. Because of the limits of Digital Rights Management, open source or free systems have not been able to provide this level of entertainment.
Free software enables the user to learn the intricacies of how the software works, and modify the software to suit his needs. Free software also provides a legal and ethical alternative to expensive proprietary software in developing nations or areas where the cost of obtaining a license for legal use of the software is prohibitive. Public institutions, like schools and government offices, where the focus of the organization is the public good, have the option to use software that is in the public domain and is not controlled by any one company.
However, proprietary software is also beneficial to the public, as well as respectful of the original author’s rights regarding their creative work. Software is the result of a person’s labor; it does not matter how easy it is to copy that work, the author still retains a natural right of ownership, according to John Locke’s The Second Treatise of Government. Proprietary software enables products like the iPad, which is being used to enable elderly people, nearly blind with cataracts, to create creative works of their own. The iPad is also being used by caretakers of severely disabled children to enable them to communicate and express themselves. It is possible that the iPad would have been created if the software used to power it had been free, but that is unknown. What is known is that the net result of the device is to better people’s lives, which is the true purpose of technology. Any technology is merely an enabler to get more satisfaction and enjoyment out of life. What the free software movement does is exaggerate the importance of a specific type of freedom, without addressing the proper place of technology in our lives.
However, the existence of free and open source software alongside proprietary software creates a mutually beneficial loop, wherein consumers and developers are able to reap the rewards of constant innovation and competition. There is a place for both proprietary and free software, and it is the author’s natural right to their creative work that gives them the freedom to choose how and why their software will be distributed.
Weekend With Android
I should have known better… I do know better, but it was on sale, and it was Black Friday, and I bought an Android phone. I purchased the HTC Desire, a perfectly reasonable choice in high-end smart phones. Android 2.1, a 1GHz processor, 512MB of RAM, and an 8GB MicroSD card for storage. The phone is well designed, solidly built, and aesthetically pleasing, but at this point, it’s still on probation; I might take it back.
One of my oddly favorite things about the Desire is HTC’s marketing design. I absolutely love the hand drawn images that adorn the packaging and the HTC site. I think it gives the product a more earthy and homegrown feel, and ties in with Android’s open source roots. HTC’s Sense UI is wonderful, and seems well thought out, with a few noticeable exceptions. HTC’s “polite ringer”, which lowers or silences the volume of the ringtone based on the phone’s position, is an excellent idea. Holding down the home button brings up a pane to switch between running apps. Pinching or double tapping the home button brings up an exposé type interface to choose the screen to bring up. The more I use the Sense UI the more I like it. It is very different from iOS, but sometimes that can be a good thing.
Yesterday the phone had a bad morning. I had seen something on the Internet that I wanted to show my wife, so I took out the phone and started the browser, found the site, and waited for the content to download and render. It was taking a while, so I set it aside and turned to go back to what I was doing. After a couple of minutes I went back and found that the phone had turned off the screen, which it’s supposed to do if it’s not in use. But then, I couldn’t turn it back on. Pressing any of the buttons on the front didn’t help, and neither did pressing the button on the top. Pressing and holding the power button on the top did not help either. The phone was entirely unresponsive. So, I popped the back off, took out the battery, waited a minute or so, replaced the battery, and the phone powered up again.
Comparing the phone to a computer: pressing and holding the power button overrides the operating system and kills power to the machine immediately, no questions asked. On iOS, pressing and holding the power button brings up a prompt asking you to swipe to power down the device. I’ve never once seen an iOS device die from loading a web page though; something very core to the system must be at work while browsing for that to happen. The HTC Desire ships with Android 2.1, but HTC also includes a Flash plugin; perhaps that is what died.
While making breakfast for the kids, I put the phone in my pocket. I turned to do something on the counter, and heard a distinct “beep, beep”, and gave a surprised look to my daughter. I pulled the phone out of my pocket and found that it had pressed against my leg, activated the phone app, and started dialing a couple of numbers. It was not the first time I’d pulled the phone out of my pocket and found that it had launched an app. I’ve since gotten in the habit of pressing the power button to lock the screen before the phone goes in the pocket. Again, not something I ever had to do with the iPod.
Later, I got in the car to drive to work, plugged the Android into the aux port in the car stereo, and started one of the 5by5 podcasts to listen to on the way in to work. Nothing. I checked the phone: it was still on, the time was still ticking along, so the media player was playing the podcast, but there was no sound. I unplugged the cable from the headphone jack, and I could hear the podcast from the built-in speakers; plugged the cable back in, and there was no sound. I said to hell with it, went in and grabbed my iPod and listened to my podcast on the way to work.
Once at work, I did some searching on Google, and found that several other people have had the problem with the headphone jack, and the fix for it was to reboot the phone. I did, and tested with a set of headphones, and sure enough, it worked again. At this point I thought… what next.
Throughout the day I’d use the phone for various things, checking Twitter, looking something up in a meeting, the kind of general mobile computing use that I’d use my iPod Touch for over wifi. After a while I noticed that the phone was generating a significant amount of heat. Not hot to the touch, but definitely much warmer than any phone or iOS device I’ve ever used. I handed the phone to a coworker and he noticed it too.
Around three in the afternoon, the battery on the phone started to die. By four it was red; by four-thirty the phone was dead. The battery will not last for a full day of normal use. I checked to make sure that I had Bluetooth, wifi, and GPS turned off, and all of them were. The Android system has an information page that details which applications are using the most battery. Number one on my system was the Android OS itself. It’s clear that if this phone is going to stick around I’ll need a few places to charge the battery, and maybe a couple of spares to keep in the briefcase.
That was yesterday. Today, so far, has been a different story. The phone worked perfectly to check my messages this morning at breakfast. The phone worked great listening to my podcasts on the way to work. Most importantly, when I got a call from the school about one of my kids possibly having an ear infection, the phone worked great to look up the doctor’s office, schedule an appointment, look up the school’s number, call them back, and send my wife a text message about what was going on. You know, the real work that a smart phone is meant for.
Except now it seems that 1Password has failed. Now what.
There is a lot to talk about with the HTC Sense UI, and the Android phone in general. When it’s good, it’s very good, but when it fails, it fails hard. Which is why, for now, the phone is still on probation.
Android Marketplace Inconsistencies
Living out in the farmland of Iowa where we do, there’s really only one carrier who provides decent service, US Cellular. US Cellular has a great service for battery replacement. If you find yourself out and about and your battery dies, you can drop by any US Cellular store and they will replace the battery for free. I was in that situation today, so I spent some time looking at the Android phones HTC Desire and Samsung Mesmerize.
Both phones are $280, with an $80 mail-in rebate. Both phones have 1GHz processors, and both phones have 5.0 MP cameras. The main difference between the two phones is that the Samsung has a 4” Super AMOLED screen, and the HTC has a 3.7” WVGA screen. Software-wise, while both phones use Android 2.1 as the core, they each have different themes, or skins. The difference in themes reminded me of the difference between KDE and Gnome on Linux. There are a few other differences; the Samsung is fully touch screen, while the HTC uses hardware buttons for the four base Android buttons: search, back, home, and menu. What I found most striking were the differences in the Android Marketplace.
I love Angry Birds for iOS, so I thought I’d see how the game looked and felt on Android. I searched for “Angry Birds” on the HTC and found two screens’ worth of knock-offs. Some of these applications took the artwork and Angry Birds name directly from the real game. There was one game called “Angry Avians”, whose icon looked like a closeup of the red bird from the real game. There were Angry Birds wallpapers, Angry Birds books, and Angry Birds unlockers. I can’t imagine that any of these apps were actually licensed to use either the Angry Birds name or the Angry Birds artwork. They are ripoffs riding the wave of the original game’s success.
Pathetic, and a poor impression of the Android Marketplace.
What I did not find on the HTC was the actual Angry Birds game from Rovio. I knew that it was released, thanks to Dan Benjamin mentioning it on The Talkshow, so I checked on the Samsung. Sure enough, the Samsung search returned 53 results, and the HTC only 51, and the Samsung included the official game. I wonder how many people buy one of those ripoffs on the HTC when they can’t find the real game, knowing that it is supposedly available for “Android”.
A quick comparison of the iTunes App Store shows that there are a few Angry Birds references, walkthroughs and hints of where to find the golden eggs, but none of them use Rovio’s artwork, and none of them are copies of the game.
The Android Market is open and free, and doesn’t give one second’s thought to the end user experience. At least with the curated App Store, Apple does a decent job of keeping unethical developers from preying on users who just don’t know any better. The difference between the two markets feels like the difference between buying from an upscale mall and buying from a back-alley black market.
Emotions and Machines
I’ve been using different forms of computer “chat” for over ten years now, starting with operator-to-operator communications over a 9600-baud satcom circuit in the Navy. Over time, I’ve become used to using certain forms of “emoticons” to convey subtle nuances of the conversation that are unnecessary in face-to-face communications. I even have friends with whom I communicate entirely over chat.
Over the summer my cousin appeared on chat, and I tried to have a conversation with her. She was not familiar with the conversational tone and rhythm of chat, which made the interface difficult and frustrating, to the point where we both simply decided to go back to email.
Apple understands the human element of their devices possibly more than any other company in their field. Their advertising plays to your emotions, and their products are designed to elicit an emotional response; an appreciation for their beauty. Technology like Skype and Apple’s FaceTime video chat removes one layer of abstraction between you and the person you are trying to communicate with, and allows the emotional facial cues that are so important in communication to come through.
If my cousin and I were chatting face to face rather than over the keyboard, I imagine our stunted conversation would have lasted a bit longer than it did.
Another point I’d like to make about emotion and computers is that even if you do see your computer as simply a machine, a tool to accomplish a task, it is difficult to use a tool for any serious length of time, with a serious financial investment, without forming an emotional connection to it. A carpenter is likely to have his favorite hammer, a mechanic his favorite ratchet, and anyone who uses a computer to create something will have their favorite brand, and strong reasons for choosing that brand.
Interaction
Last night I did my civic duty by casting my vote at the local community center. I walked down since it was not far from my house, and enjoyed the crisp night air. Once I arrived at the community center I noticed that the voting process was being run by a group of elderly women, two of whom had Lenovo laptops, which were curiously tied together by an ethernet cable. Each of the laptops had a label printer attached to it via USB, with the other USB port occupied by a mouse. As I approached one lady noticed me and asked me to fill out a form, which I did, and then asked if I had voted there before, which I had not. That turned out to be a bit of a problem, one that was easily resolved, and one that was caused entirely by the laptops.
The laptops were labeled “Primary” and “Secondary”, and each had stickers on it showing which ports on the side to attach the mouse and sticker printer to. They were each running some kind of database software that had all of our names and registration status. When I was asked if I had voted here before, I said that I had not voted in that town, but I had voted early at the county seat during the presidential elections of ‘08. They wanted to make sure I was in the database, but since they were continually having problems with the computers it took some time.
While I was waiting, a man next to me needed to be registered, so one lady asked another, who was apparently in charge, to come and help put him in the database. I overheard the two of them ask questions like “I’m not sure what the difference is between ‘accept’ and ‘apply’”, and “Ok, I don’t know what to do here, where do I go next?” I couldn’t help but wonder who had designed this system, knowing that its intended users were going to be elderly women who had little to no computer experience. One of the ladies rebooted her computer twice before she was able to get it to work again.
I leaned over to take a look at the screen, and confirmed what I had previously thought. It looked like an application left over from the Windows 3.1 days: multiple screens, buttons everywhere, seemingly random labels. How much simpler and easier could the entire night have gone if they had given that application to a UX designer first, before sending it out to be field tested?
People have become used to computers behaving this way. They are incomprehensible, confusing machines that break if you look at them wrong. I wanted to tell the ladies that the problems they were having with the computers were not their fault, but the fault of the people who designed the computer, the operating system, the application, and the process that must be followed to glue them all together. I wanted to tell them that it doesn’t have to be this way, and that computers are meant to make things easier, not harder; simpler, not more complicated. If they don’t, then why do we continue to use them? It would have been easier to stamp everyone with a rubber stamp last night than to deal with those machines.
I wanted to tell them a lot of things, but there were people behind me in line, and it had been a long day already. So, I smiled, said thank you, and cast my vote. Then, I walked home and enjoyed the starry night.
Clean and Clutter Free
I like to keep both my desk and my computer desktop clean and clutter free. I’ve found that when there is less visual noise, I’m able to better concentrate and focus. In the article “The Proximity Compatibility Principle: Its Psychological Foundation and Relevance to Display Design”, Wickens and Carswell outline scientific principles that back up my personal preference.
Unless I’m actively working on a project that requires papers, my desk has nothing on it except my notebook, my computer, and a pen and pencil holder. Likewise, my desktop on my Mac is normally free from files, icons, and distracting wallpaper. If there is a file on my desktop, or if the trash needs to be emptied, I find that my attention is drawn to those cues, and I wind up dealing with them right away.
I always thought it was just because I was picky, but the article says
“It is clear that the negative influences of confusion and clutter will be enhanced to the extent that the contributing elements are both salient (bright, distinctive) and cannot be easily discriminated from the relevant ones. (In the visual search literature, this is known as target-distractor similarity.)” (Wickens and Carswell 473-494)
To me, this is saying that when there are distracting visual elements, like a bright and colorful wallpaper, it takes additional mental effort to concentrate on the task at hand. Furthermore, the article also states
“Added clutter is known to disrupt visual scanning, whether this scanning is carried out by movement of the eyeball or by movement of an internal ‘attention pointer’ (Thorndyke, 1980). It is evident that the costs of scanning over (or filtering out) this clutter will be greater when there are added burdens of integration.” (Wickens and Carswell 473-494)
When writing, or entering commands into a terminal, I find it much easier to concentrate on the task at hand when there is a single window open on my display, the background is a neutral grey, and all non-essential visual elements are removed or hidden. I was very glad to read about Information Access Cost; it is good to know that there is actually hard science backing up my personal preferences.
An Idea About Money
So, last night, when I should have been sleeping, I had an idea. What if, instead of going to a bank to get a loan, you could ask a few of your Internet friends for a loan instead? Say you want $1000 for a new iMac, you go to some imaginary website and tell it how much you want. This website does a quick credit check to get your credit score, and then determines your interest rate based on that score. You agree to the rate, and your request is posted anonymously to the site.
Next, people with money to invest come to the site and see your request. Let’s say your agreed upon interest rate was ten percent. One person might see your request and agree to loan you $100. This person would then make nine dollars off of your loan when you pay it back. On a $1000 loan, at ten percent interest, the web site would take one percent for itself, and then split the remaining nine percent evenly between individual loaners based on how much they loan. If one person loaned the entire $1000, he would make a profit of ninety dollars off of that loan when you paid it back. If ten people each loaned $100, each of the ten would make nine dollars apiece. If one person loaned $900, and another $100, the person who loaned $900 would make $81, and the person who loaned $100 would make nine dollars.
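A quick back-of-the-envelope check of that split, just shell arithmetic to sanity-check the numbers above (nothing to do with any real lending site):

```sh
# 10% interest on the amount loaned; the site keeps 1 point, and the
# lenders split the remaining 9 points in proportion to what they put in.
echo "900 100" | awk '{ for (i = 1; i <= NF; i++)
    printf "lender %d loaned $%d, earns $%.2f\n", i, $i, $i * 0.09 }'
# lender 1 loaned $900, earns $81.00
# lender 2 loaned $100, earns $9.00
```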
I’m sure this has already been done somewhere, but it was a new idea to me so I thought I’d make a note of it. I’m not sure how this compares to micro-loans for businesses in third-world countries, or how it compares to Kickstarter, but I thought it was interesting. I imagine there would need to be government regulation, and some kind of federal insurance as well, for cases when someone defaulted on a loan. I’m also sure there would have to be some kind of protection against fraud, but again, this is just a fresh idea, it’s not well thought out at this point at all.
Who knows… maybe there’s something there.
The Smell of Salt
A long, long time ago, what seems like a different life now, I was a Sailor. Towards the end of my teenage years, I came to a point where I knew I had to do something with my life, and at the time, that something was not college. My adoptive father was in the Navy, so I decided to follow in his footsteps and joined the Navy myself in October of 1995. From June of 1996 to July of 1999 I was assigned to the USS Platte, an oiler. During this time I made the best friends of my life, met my wife, and travelled across Europe and even into the Middle East.
It was a different world back then, back before 9/11. It was a brief time of peace, a period of national calm that came after the cold war was over, and before the war on terror began. I went on two six-month deployments to the Mediterranean, Med cruises we called them. We would travel from port to port, spending anywhere from a couple of days to three weeks in port, followed by a week or two underway.
In port, my friends and I would make a point of going out and seeing the sights during the day, before taking in the local beverages at night. I loved the architecture, I loved the age of some of the buildings and castles that we found. Back here in the States, if a home gets to be one hundred years old, it’s an amazing thing, but over there structures built by man could last for hundreds and hundreds of years.
Sometimes, when we were out to sea, the water would be so calm it looked like glass. Other times the waves splashed over the weather decks and would threaten to wash an unwary sailor overboard. During those times I’d have to take medicine from the ship’s doctor to try to ward off the seasickness that would invariably come. I never got over it; in three years I never got my “sea legs” like most of the guys did. But mostly it wasn’t a problem, most of the time we avoided the rough weather and stayed in the calmer seas to do our refueling of other ships. Most times the waves were small enough that I could stand on the edge of the ship and watch flying fish dart between the crests of the waves. And sometimes, I would close my eyes and smell the salt in the air.
Add a User - Send an Email
I was asked on Twitter the other day why I disliked IBM’s enterprise software. This, in addition to my previous TWS rant, is my answer to that question.
We wanted to do two simple tasks. Add a new user to the system, and have the system send emails automatically when there’s a problem. Seems easy enough, unless you are using the Tivoli Workload Scheduler. Then it’s an entirely different matter.
Add A User
Some new websites, like Posterous, create a new user for you when you send them an email. Others, like Tumblr, need little more than a username and password to get you set up. To add a user to TWS, you would think that there would be a nice GUI with a menu option that says “Add User”, but none exists. Instead, you have to log into the command line on the server, run the command “dumpsec”, and redirect its output to a file. Then, you have to vi that file, and edit the XML to add the username to the correct group. Save that file, and run “makesec filename” to load the new user into the system.
Then, restart the TWS application server. IBM is not sure if this is a required step or not, or at least the help I had on the phone wasn’t sure.
Then, you need to go into the web interface for TWS, and add the user into WebSphere as well.
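Put together, the command-line half of that dance looks roughly like the following sketch (from memory; the temp file name is arbitrary, and this assumes the TWS user’s shell environment is already loaded on the master):

```sh
dumpsec > /tmp/security.conf   # dump the current security definitions
vi /tmp/security.conf          # add the new username to the correct group
makesec /tmp/security.conf     # compile and load the edited definitions
```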
It’s like the creators of TWS got together for a brainstorming session one day and asked “What is the most difficult, unintuitive way we can add a user to the system?” Congratulations, folks… I think you nailed it.
Send An Email
The heart of TWS is scheduling jobs, and then acting on the results of those jobs. One task we wanted from it was to let us know if it couldn’t run a job by sending us an email. For two years we worked with consultants and IBM to get this to work. We wound up having to get some of the original developers on the phone from Rome, and with their help we finally found the problem.
TWS stores some of its settings in a DB2 database. That right there is enough for me to toss the entire application in the trash. In Unix, configuration settings are stored in a plain text file, one file per application if possible. And if that wasn’t bad enough, we found that one of the binaries was modifying the configuration settings in the DB2 database when it was launched, changing the port that a certain daemon was supposed to be listening on. This daemon was responsible for listening for incoming configurations from the main server, including the configuration telling it to send the email. It’s hard for me to express how wrong this is, but I’ll try.
Any daemon should read its configuration from a file under the /etc directory. That’s how it works in Unix, and for the past thirty years it’s worked out pretty great. No daemon should have access to modify the configuration of any other daemon. Also, if listening on a certain port is central to the communications and functioning of the application, don’t make that configurable; just hard code the daemon to listen on that port. I suppose it would also be acceptable to allow configuration that would override the default, but only if the daemon reads its configuration from a plain text file, and only if, in the absence of the overriding configuration, the default port is chosen to listen on.
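To make the convention concrete, here is a hypothetical sketch of what that looks like. Note that mydaemon and /etc/mydaemon.conf are invented names for illustration, not part of TWS or any real package:

```sh
#!/bin/sh
# Start a daemon on a hard-coded default port, unless a plain text
# file under /etc overrides it (all names here are hypothetical).
PORT=31182    # the well-known default, hard-coded
if [ -r /etc/mydaemon.conf ]; then
    # allow an optional "port=NNNN" line in the plain text config
    override=$(sed -n 's/^port=\([0-9]*\)$/\1/p' /etc/mydaemon.conf)
    [ -n "$override" ] && PORT=$override
fi
exec /usr/sbin/mydaemon --listen "$PORT"
```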
Again, $30,000 and the application is held together with duct tape and silly putty.
Linux Hidden ARP
To enable an interface on a web server to be part of an IBM load balanced cluster, we need to be able to share an IP address between multiple machines. This breaks the IP protocol, however, because you could never be sure which machine will answer a request for that IP address. To fix this problem, we need to get down into the IP protocol and investigate how the Address Resolution Protocol, or ARP, works.
Bear with me as I go into a short description of how an IP device operates on an IP network. When a device receives a packet from its network, it will look at the destination IP address and ask a series of questions about it:
- Is this MY IP address?
- Is this IP address on a network that I am directly connected to?
- Do I know how to get to this network?
If the answer to the first question is yes, then the job is done, because the packet reached its destination. If the answer is no, it asks the second question. If the answer to the second question is no, it asks the third question, and either drops the packet as unroutable, or forwards the packet on to the next IP hop, normally the device’s default gateway.
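On a Linux box you can watch the kernel make exactly this decision; `ip route get` reports whether a destination is local, directly connected, or reachable via a gateway (the address here is just an example):

```sh
# Ask the kernel how it would route a packet to this destination.
ip route get 192.168.0.181
# "local ... dev lo" means question 1 was answered yes; a bare
# "dev ethX" means question 2; "via <gateway>" means question 3.
```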
However, if the answer to the second question is yes, the device follows another method to determine how to get the packet to its destination. IP addresses are not really used on local networks except by higher level tools or network-aware applications. On the lower level, all local subnet traffic is routed by MAC address. So when the device needs to send a packet to an IP address on the subnet that it is attached to, it follows these steps:
- Check my ARP table for an IP to MAC address mapping
- If needed, issue an ARP broadcast for the IP address – an ARP broadcast is a question sent to all devices on the subnet that simply asks “if this is your IP address, give me your MAC address”
- Once the ARP reply is received, the packet is forwarded to the appropriate host.
So, to put this all in perspective, when multiple machines share the same IP address, each of the machines will reply to the ARP request, and depending on the order in which the replies are received, it is entirely possible that a different machine will respond each time. When this happens, it breaks the load balancing architecture, and brings us down to one server actually in use.
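You can see this failure mode from any other host on the subnet with `arping`, which prints every ARP reply it receives (the interface name here is an example):

```sh
# Send a few ARP who-has requests for the shared address and watch the
# replies; with hidden ARP misconfigured, more than one MAC address
# will answer for the same IP.
arping -I eth0 -c 3 192.168.0.181
```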
The next question is normally: Why is that? Why do the web servers need that IP address anyway? The answer to this is also deep in the IP protocol, and requires a brief explanation of how the load balancing architecture works.
To the outside world, there is one IP address for myserv.whatever. Our public address is 192.168.0.181 (or, whatever). This address is assigned in three places on one subnet: the load balancer, the first web server, and the second web server. The only server that needs to respond to ARP requests is the load balancer. When the load balancer receives a packet destined for 192.168.0.181, it replaces the destination MAC address with the address of one of the web servers and forwards it on. This packet still has the original source and destination IP addresses on it, so remember what happens when an IP device on an IP network receives a packet… it asks the three questions outlined above. So, if the web servers did not have the 192.168.0.181 address assigned to them, they would drop the packet (because they are not set up to route, they would not bother asking the second or third questions). Since the web servers do have the IP address assigned to one of their interfaces, they accept the packet and respond to the request (usually an HTTP request).
So, that covers the why; let’s look at the how. Enable the hidden ARP function by entering the following into /etc/sysctl.conf:
```
# Disable response to broadcasts.
# You don't want yourself becoming a Smurf amplifier.
net.ipv4.icmp_echo_ignore_broadcasts = 1
# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
# enable ipV6 forwarding
#net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2
```
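Settings in /etc/sysctl.conf are applied at boot; to load them into a running kernel right away, this should do it:

```sh
# re-read /etc/sysctl.conf and apply the values immediately
sysctl -p
```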
The relevant settings are explained here:
arp_ignore = 3: Do not reply for local addresses configured with scope host, only resolutions for global and link addresses are replied.
For this setting the really interesting part is the “configured with scope host” part. Before, using `ifconfig` to assign addresses to interfaces, we did not have the option to configure a scope on an interface. A newer (well, relatively speaking) command, `ip addr`, is needed to assign the scope of host to the loopback device. The command to do this is:
```
ip addr add 192.168.0.181/32 scope host dev lo label lo:1
```
There are some important differences in the syntax of this command that need to be understood to make use of it on a regular basis. The first is the idea of a label being added to an interface. `ip addr` does not attempt to fool you into thinking that you have multiple physical interfaces; it will allow you to add multiple addresses to an existing interface and apply labels to them to distinguish them from each other. The labels allow `ifconfig` to read the configuration and see the labels as different devices.
For example:

```
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:9477 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9477 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:902055 (880.9 Kb)  TX bytes:902055 (880.9 Kb)

lo:1      Link encap:Local Loopback
          inet addr:192.168.0.181  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

lo:2      Link encap:Local Loopback
          inet addr:192.168.0.184  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
```
Here, `lo`, `lo:1`, and `lo:2` are viewed as separate devices by `ifconfig`.

Here is the output from the `ip addr show` command:
```
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet 192.168.0.181/32 scope host lo:1
    inet 192.168.0.184/32 scope host lo:2
    inet 192.168.0.179/32 scope host lo:3
    inet 192.168.0.174/32 scope host lo:4
    inet 192.168.0.199/32 scope host lo:5
    inet 192.168.0.213/32 scope host lo:8
    inet 192.168.0.223/32 scope host lo:9
    inet 192.168.0.145/32 scope host lo:10
    inet 192.168.0.217/32 scope host lo:11
    inet 192.168.0.205/32 scope host lo:12
    inet 192.168.0.202/32 scope host lo:13
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
```
Here we can see that the `lo:1` (etc…) addresses are assigned directly under the standard `lo` interface, and are only differentiated from the standard loopback address by their label.
Here is the same output from the `eth2` device:
```
4: eth2: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:10:18:2e:2e:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.73/24 brd 192.168.0.255 scope global eth2
    inet 192.168.0.186/24 brd 192.168.0.255 scope global secondary eth2:1
    inet 192.168.0.183/24 brd 192.168.0.255 scope global secondary eth2:2
    inet 192.168.0.176/24 brd 192.168.0.255 scope global secondary eth2:3
    inet 192.168.0.178/24 brd 192.168.0.255 scope global secondary eth2:4
    inet 192.168.0.201/24 brd 192.168.0.255 scope global secondary eth2:7
    inet 192.168.0.212/24 brd 192.168.0.255 scope global secondary eth2:8
    inet 192.168.0.222/24 brd 192.168.0.255 scope global secondary eth2:9
    inet 192.168.0.147/24 brd 192.168.0.255 scope global secondary eth2:10
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary eth2:11
    inet 192.168.0.46/24 brd 192.168.0.255 scope global secondary eth2:5
    inet 192.168.0.208/24 brd 192.168.0.255 scope global secondary eth2:12
    inet 192.168.0.204/24 brd 192.168.0.255 scope global secondary eth2:13
    inet6 fe80::210:18ff:fe2e:2ea2/64 scope link
       valid_lft forever preferred_lft forever
```
Same as above, the addresses do not create virtual interfaces; they are simply applied to the real interface and assigned a label for management by `ifconfig`. Without the label, `ifconfig` will not see the assigned address.
arp_announce = 2: Always use the best local address for this target. In this mode we ignore the source address in the IP packet and try to select local address that we prefer for talks with the target host. Such local address is selected by looking for primary IP addresses on all our subnets on the outgoing interface that include the target IP address. If no suitable local address is found we select the first local address we have on the outgoing interface or on all other interfaces, with the hope we will receive reply for our request and even sometimes no matter the source IP address we announce.
This one is a little tricky, but I believe it deals with how the web servers talk with the clients requesting web pages. In order for the web page to come up and maintain a session, when the server sends a packet back to the client, it needs to come from the source IP address of the hidden IP address. In order to do this, the web server looks at the destination address of the client packet, and then responds using that as its source IP address instead of its actual IP address. Clear as mud, right?
I hope this helps explain things a little better about the hows and whys of the web server’s side of load balancing. Note, however, that I didn’t talk at all about the edge server. That’s because the edge server’s job is done at the application level, and correct configuration of it does not require special consideration at the OS level.
Adamo - Apple Pushes the Industry Forward
I almost feel sorry for the other hardware manufacturers. No matter what they do, no matter what they come out with, they seem to be forever following in Apple’s footsteps. Such is the case with Adamo from Dell, a clear shot at the MacBook Air.
Adamo uses the same machined aluminum manufacturing process introduced by Apple with the Air, which has since spread very successfully to the rest of the Mac laptop line. Adamo markets itself as very thin, and very light, and has an artistic feel to their advertising that seems out of place with “cookie cutter” Dell. In fact, the marketing is almost too artistic, almost like they are trying too hard to shed their old image.
The specifications between the two lines are very similar.
| | Adamo | MacBook Air |
| --- | --- | --- |
| CPU | 1.2GHz Core 2 Duo | 1.6GHz Core 2 Duo |
| RAM | 2GB | 2GB |
| Display | 13.4” | 13.3” |
| GPU | Intel GMX 4500 | NVIDIA GeForce 9400M |
| Storage | 128GB SSD | 128GB SSD |
| Weight | 4 pounds | 3 pounds |
| Price | $1999 | $2299 |
As you can tell, this is not truly an apples to apples (pardon the pun) comparison. At this price point, the major difference between the Air and the Adamo is the $500 optional SSD. Configured with the 120GB SATA drive, the Air comes in at $1799. The Air is faster and lighter than the Adamo, and includes a dedicated NVIDIA graphics card.
A more accurate comparison may be with the MacBook, also configured with a 128GB SSD.
| | Adamo | MacBook |
| --- | --- | --- |
| CPU | 1.2GHz Core 2 Duo | 2.0GHz Core 2 Duo |
| RAM | 2GB | 2GB |
| Display | 13.4” | 13.3” |
| GPU | Intel GMX 4500 | NVIDIA GeForce 9400M |
| Storage | 128GB SSD | 128GB SSD |
| Weight | 4 pounds | 4.5 pounds |
| Price | $1999 | $1749 |
The MacBook case is larger and heavier. With the MacBook there is half a pound of difference in weight, but also a big step up in CPU speed to 2.0GHz. Fortunately for Dell, the Adamo is not about internal hardware specs. It’s about trying to catch up with Apple in industrial design.
The screen looks gorgeous; I love the edge-to-edge glass design. The Adamo screen has a slightly different resolution from the standard 1280x800, coming in at 1366x768. Dell has done a great job with the Adamo. Unfortunately, it’s still not a MacBook killer, simply because it’s still not a Mac. Great industrial design is only one part of the puzzle of what makes a Mac a Mac. The other vital piece is OS X. With OS X and the Mac, Apple has created a machine that drifts into the background, gets out of your way, and lets you do what you set out to do. Adamo ships with Windows Vista Home Premium 64-bit. No matter how great the hardware is, if the software is not intimately tied to it the way only a company that creates both the hardware and the software can manage, it’s still just another PC.
People may initially buy their first Mac because of the design, or the halo effect of the iPod. They buy their second Mac because of the experience.
A Work in Progress
A few days ago I decided that I was not going to use anyone else’s theme on my site. It happened after I stumbled across another site using the exact same theme as mine. Unavoidable really, as long as you are using someone else’s theme. So, the decision was to either stop using Wordpress, or to design my own theme. I love Wordpress, so I decided to go with the latter.
Designing a web site is a strange mix of code and graphic design. In my case, I’ve had to go back to php, a language I left a long time ago, and start learning CSS. Since I’ve been fooling around in Cocoa for quite a while, going back to php is just painful. Objective-C is a beautiful programming language. Mixing php and html… well, that’s just plain ugly. However, that being said, it’s familiar territory, so I almost feel like I’m coming home. One concept that I’ve learned with Cocoa is the Model-View-Controller pattern, basically separating out the presentation code from the application code (yes, I know there is a lot, lot, lot more to it than that… no, I’m not going to get into it here). Using CSS kind of reminds me of the MVC method: in your php/xhtml code you define what objects are going to be displayed, and in CSS you define where and how to display them. I like the separation… keeps it clean.
At any rate, I’ve been busy coming up with the overall look and feel of the site. One thing I believe about software is that simplicity always wins. At least where I’m concerned it does; that’s why I use a lot of the apps that I use: because they are simple to use. Think about the Google home page. Simple, and it wins.
I’d appreciate any comments on the design, and please keep in mind this is only a very early mockup. Also, I’m going to be using this as my avatar everywhere that I’ve got an account online. A friend of mine, who actually is a designer, laughed when I told him about the tools I’ve been using to do the design so far. First, the initial concept was created in OmniGraffle. From OmniGraffle, I’d export it as a Photoshop file and open it in Pixelmator to add the leaves and other touch ups. Right now, that’s as far as I’ve got. I’ll finish the design in the next couple of days, and then move into chopping the file up and getting deep into some code. Hopefully, I’ll have this finished in two or three weeks.
AutoYast
I wrote this last year and never posted it. I’m glad I found it and can post it now.
One of the projects I’ve been working on in the past week has been a rapid deployment server for SLES 9. I would have liked to deploy SLES 10, but we are constrained by our application requirements. Novell has done a great job of making it easy to deploy SLES or SLED using their Autoinstall and Installation Server options available through YaST. Using Autoinstall, YaST steps you through the options required to generate an xml file; this xml file is read by YaST during system install and automates the process. To build a network installation source, the contents of the CDs or DVD need to be copied to the hard drive, preserving symbolic links. YaST’s Installation Server makes this easy, and also makes “slipstreaming” (to borrow a Windows term) a service pack into the install source automatic. I’ve built the network install source both ways, and I prefer using YaST to do it for me.
Even with all this being said, YaST (in SLES 9) is still missing some features that require me to edit the xml file directly. The most important feature it’s missing, which they included in SLES 10, is the ability to create LVM volumes during partitioning. Not to say that it’s not possible, it just requires editing the xml source file. Using a little trial and error, I was able to partition the drive with a 200MB /boot (too big, I know), a 2GB swap, and then partition the rest of the drive as LVM, and then mount /, /var, /opt, /usr, /tmp, /home, and /work inside the LVM. Works like a charm. If you need a working autoinst.xml file, you can download mine here.
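For reference, the shape of the edit is roughly the following; this is a from-memory sketch of the AutoYaST schema, trimmed way down. Device names, sizes, and the volume group name are examples, and the SLES 9 schema may differ in details, so check the downloadable autoinst.xml above for the working version:

```xml
<!-- Hypothetical fragment: /boot and swap as plain partitions, the rest
     of the disk as an LVM physical volume, then logical volumes inside
     the "system" volume group. -->
<partitioning config:type="list">
  <drive>
    <device>/dev/sda</device>
    <partitions config:type="list">
      <partition><mount>/boot</mount><size>200mb</size></partition>
      <partition><mount>swap</mount><size>2gb</size></partition>
      <partition>
        <lvm_group>system</lvm_group>
        <partition_id config:type="integer">142</partition_id> <!-- 0x8E, LVM -->
        <size>max</size>
      </partition>
    </partitions>
  </drive>
  <drive>
    <device>/dev/system</device>
    <is_lvm_vg config:type="boolean">true</is_lvm_vg>
    <partitions config:type="list">
      <partition><lv_name>root_lv</lv_name><mount>/</mount><size>4gb</size></partition>
      <partition><lv_name>var_lv</lv_name><mount>/var</mount><size>4gb</size></partition>
      <partition><lv_name>home_lv</lv_name><mount>/home</mount><size>max</size></partition>
    </partitions>
  </drive>
</partitioning>
```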
This setup is great, but it required me to boot off of the CD, and then enter a long install=nfs://bla bla bla/bla bla autoyast=nfs://blalbalba line at boot time. To really make the system work, I needed network booting for full automation. I found a great walkthrough in this pdf, which, surprisingly enough, worked for me the first time. I had to install the tftp, syslinux, and dhcp-server rpms, then edit a couple of files, move a couple of things, really no big deal.
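The heart of the network boot is a small pxelinux config that passes those same install= and autoyast= arguments automatically, so nothing has to be typed at the boot prompt. A sketch, with made-up paths and server address:

```
# /tftpboot/pxelinux.cfg/default (hypothetical paths and NFS server)
default sles9
prompt 0

label sles9
  kernel linux
  append initrd=initrd install=nfs://192.168.0.10/install/sles9 autoyast=nfs://192.168.0.10/install/autoinst.xml
```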
Now, I’m ready. When we get 100+ servers in, which I’m told I’ll have 7 days to install, I’ll be able to say “what would you like me to do with the rest of the time?”
Account Pruning
I’m a geek. Understanding that little fact puts me a little closer to being in touch with myself, and understanding that I’ve got a habit of trying out every new service or technology that comes along. That’s fun, but in the case of online services, I wind up with accounts all over the place. So, the past few days I’ve been pruning my online accounts down to what I really need.
I started out with AOL in the ’90s, and quickly learned that AOL masked the rest of the Internet, and that all I needed was a connection and a real browser. From there, I went to Hotmail, and stuck with Hotmail for several years, up until the time I bought my first Mac. Since getting an iBook in ‘03 I’ve had a .Mac account, but it hasn’t always been the best email service. I also tried Yahoo mail, but I’ve never liked any of the Yahoo services… too gaudy for my taste. Then came Gmail, which is an excellent service, and one that I’ve stuck with for quite a while now. However, there’s also the new MobileMe, which promises to synchronize everything everywhere, and since MobileMe offers a ton of other features, and integrates seamlessly into my MacBook, I’m sold on it. I’ve actually gone back and forth between MobileMe and Gmail quite a bit. Since Gmail can download POP3 mail and send mail as my MobileMe account, it makes it easy to switch over to it. However, I’m not a big fan of the Gmail design, or any of the skins that I’ve seen, and I really like to use Mail.app for my email. I could use POP with Gmail, which works great, or I could use IMAP, which doesn’t work so great with Gmail but works perfectly with MobileMe. I also really love the visual design of MobileMe. I think it’s uncluttered and smooth. No ads (since it’s a paid service).
So, anyway… I wind up with five or six email accounts spread out across the interwebs. I’ve been closing them one at a time, and it’s not always easy. A lot of companies, like Microsoft and Yahoo, will not just close your account straight off. You have to request that it be closed and then not attempt to log back in under that user name for three or four months.
social
I’ve had accounts on MySpace, Facebook, LinkedIn, Friendster, Orkut, Del.icio.us, Flickr, Twitter, Pownce… and probably a few more that I cannot think of right now. I thought all of them were fun, but of limited functionality. Over the past couple weeks I’ve pruned them down to Twitter and Flickr, and I think I’ll keep it at that. Twitter is fun, and it’s got some additional services that I use every day. I follow CNN Breaking News, and get a text message as soon as some important news story develops. It’s not too frequent, only really big stuff. Some of the people I follow also point out some really cool stuff on Twitter that they don’t mention anywhere else. Flickr is a great photography site, and I’ve had a pro account in the past, but not now. One reason I’m really not happy with Flickr is that I have to have a Yahoo account to use it. I’ve already said that I don’t like Yahoo that much, and would much rather close the account all together. Also, MobileMe has a photo sharing service that integrates right into iPhoto; although that service is not nearly as “social” as Flickr, it’s still a way to get my pictures out there. If I can find a way to integrate the MobileMe galleries into Wordpress we’ll be in business. I consider the relationship between Flickr and myself “Under Review”. Del.icio.us = same story… great service… bought by Yahoo… I don’t want a Yahoo account.
storage
When I was looking at a replacement for MobileMe, I needed a replacement for iDisk. The best I could find was Box.net, but I could never get the WebDAV mount working properly in Linux. iDisk works great pretty much every time; I rarely have a problem with it. Also, I can sync my iDisk locally to my Mac, which means I still get offline access. If, that is, I were ever offline. I also looked at Microsoft’s Live services, but I don’t think I ever actually used the online storage for anything.
elsewheres
I wind up trying out all kinds of beta services and startup companies’ web sites. I’ve loved all the new Web 2.0 design that’s been so big lately. To be honest, I can’t even begin to count all the services I’ve signed up for. I’m hoping that eventually the accounts I’ve signed up for but no longer use will expire. I think what I really need to do is come up with a fake online identity and use it to sign up for all the accounts that I’m just trying, versus the accounts that I actually use on a daily basis. On the other hand, why should I even bother with shutting them down? Well, to answer that, I’ve got to go back to my opening sentence: I’m a geek. Being a geek also means that I really like to keep things organized, and when they’re not, it drives me nuts. I need to prune, or I can’t concentrate; it’s an open loop, and open loops need to be closed.
Agility
To create the perfect datacenter, what would you recommend? For me, the perfect datacenter would be based on agility. We would be able to add new capacity when needed, and reallocate resources whenever needed, quickly and easily. We would be able to back up everything off-site, securely and easily. We would use open source software whenever possible, so we would not be constrained by licensing schemes. Would we have a SAN? Yes, most likely something very simple to administer, like a NetApp. We would boot from the SAN and have no moving parts in the servers themselves, so we would see very few hardware failures. Whenever possible we would keep to one style of hardware, i.e. all blades, or all 1U rack mounts, etc.
We would purchase the servers as needed. Purchasing the equipment instead of leasing it gives us more flexibility and decreases overall cost. We would still abide by the server life cycle, but instead of returning older servers to the vendor, we would re-purpose them by migrating them over to test and development, or management servers. Then, when the servers have truly reached the end of serviceable life, we would drop them on eBay to recoup a bit of the cost.[1] We would purchase racks with built-in cooling, fans at top and bottom. We would have an ambient temperature sensor hooked up to Nagios to keep an eye on the environment. Nagios, of course, would keep an eye on everything else as well.
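A quick sketch of what that temperature check might look like: Nagios plugins just print a status line and exit with 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN). The sensor file path and the thresholds below are made-up placeholders; whatever the sensor actually exposes would go in their place.

    #!/usr/bin/env python
    # check_ambient_temp.py - minimal Nagios plugin sketch.
    # Assumes the sensor writes a Celsius reading as plain text to a
    # file; the path and thresholds here are hypothetical.
    import sys

    SENSOR_FILE = "/var/run/ambient_temp"   # hypothetical sensor output
    WARN_C = 27.0
    CRIT_C = 32.0

    def main():
        try:
            temp = float(open(SENSOR_FILE).read().strip())
        except (IOError, ValueError) as err:
            print("TEMP UNKNOWN - could not read sensor: %s" % err)
            sys.exit(3)
        # Nagios performance data: label=value;warn;crit
        perf = "| temp=%.1f;%.1f;%.1f" % (temp, WARN_C, CRIT_C)
        if temp >= CRIT_C:
            print("TEMP CRITICAL - %.1fC %s" % (temp, perf))
            sys.exit(2)
        if temp >= WARN_C:
            print("TEMP WARNING - %.1fC %s" % (temp, perf))
            sys.exit(1)
        print("TEMP OK - %.1fC %s" % (temp, perf))
        sys.exit(0)

    if __name__ == "__main__":
        main()

Register it as a command and a service check in Nagios, and the room itself gets the same alerting, escalation, and history as the servers.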
We would run our cabling under the floor in Snake Tray: gigabit Ethernet to the servers, and maybe a 10-gigabit fiber backbone between the switches and routers. It may be expensive to implement, but it would last, and would provide more than enough bandwidth. I would build a pair of OpenBSD firewalls with automatic failover and load balancing, one pair for each of two Internet connections. I suppose there would have to be two sets of firewalls on each Internet uplink to provide a DMZ, which would be a good place for a Snort system.
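On OpenBSD the failover would mostly be CARP and pfsync doing the work: the pair shares a virtual IP, and pfsync replicates the firewall state table over a dedicated link so established connections survive a failover. A bare-minimum sketch follows (interface names, addresses, and the password are placeholders, not a tested config):

    # /etc/hostname.carp0 -- shared virtual IP on the master
    # (the backup uses the same line plus "advskew 100")
    inet 192.0.2.1 255.255.255.0 NONE vhid 1 pass sharedsecret

    # /etc/hostname.pfsync0 -- state sync over a dedicated crossover link
    syncdev em2
    up

    # pf.conf -- let the redundancy protocols themselves through
    pass quick on em2 proto pfsync keep state (no-sync)
    pass on em0 proto carp keep state

True load balancing across the pair takes more than this (a second vhid with the advskew values reversed, or CARP’s ARP balancing), so consider the above the failover piece only.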
We would deploy a single operating system, possibly Ubuntu: something with commercial support if needed, but enough freedom to keep things moving the way we want them to… no restrictions. Ubuntu Server is not bad, and with Canonical providing support it’s reliable enough to build a business on. We would keep everything at the same patch and kernel level.
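Keeping a hundred-odd machines at the same patch level begs for automation. Short of a full configuration management system, even a throwaway script will do; this sketch assumes passwordless SSH with keys already distributed, sudo rights on each box, and a hypothetical hosts.txt inventory with one hostname per line:

    #!/usr/bin/env python
    # patch_sync.py - naive sketch: bring every host in the inventory
    # up to the same patch level with apt over SSH.
    import subprocess

    HOSTS_FILE = "hosts.txt"  # hypothetical inventory, one host per line

    def patch(host):
        # non-interactive update + upgrade on the remote box
        cmd = "sudo apt-get -q update && sudo apt-get -qy upgrade"
        return subprocess.call(["ssh", host, cmd])

    def main():
        with open(HOSTS_FILE) as f:
            hosts = [line.strip() for line in f if line.strip()]
        failed = [h for h in hosts if patch(h) != 0]
        if failed:
            print("FAILED: %s" % ", ".join(failed))

    if __name__ == "__main__":
        main()

It is serial and trusting, which is exactly why real shops grow into tools built for the job, but it illustrates the idea: one inventory, one command, one patch level.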
Yes, this is a pipe dream. In reality, datacenters are heterogeneous, organically grown, and often held together with duct tape and bubble gum. What would we build, though, if we had hundreds of thousands of dollars and a blank slate to work from? If the task were given to me, this is what I would build.
[1] We may not be able to recoup much from eBay, but it’s better to sell them there than to junk them altogether. The servers may end up in a hobbyist’s garage, building the next version of Linux!
9/11
I slept. The world changed all around me, and I slept.
My wife woke me up in a shaking, excited voice, “Jon, get up! Terrorists attacked the World Trade Center!” I remember thinking that she must have been watching a movie of some sort, and I dozed off again. A few minutes later, I got up, put on some sweats and a flannel shirt, and went downstairs to see what she was talking about. Rhonda’s eyes were wet and red with shock. She sat on the edge of our recliner, her knees together, her elbows on her knees, and her chin in her hands. She was watching television, and again I thought that she was watching a movie. Attack on America, the headline said in bright red letters at the bottom of the screen. We were watching CNN.
Suddenly feeling the need not to stand, I took a seat on our couch. The picture on our screen showed the World Trade Centers smoldering, smoking, and burning. The reporter spoke of chaos, of passenger airplanes crashing into the Pentagon and into both towers of the World Trade Center. As we watched, the structural integrity of the first tower failed, and the tower collapsed. Rhonda began to cry. I looked at the screen in disbelief. For some reason, I could not get the idea out of my head that we were watching a movie. I had the sensation that we were being fooled, like the Orson Welles 1938 radio broadcast of “War of the Worlds.” I knew, however, that this was not “War of the Worlds”; this was more like Pearl Harbor, this was War.
Not long after the first tower fell, the second collapsed in on itself. During the second tower collapse, America lost many of its bravest men and women: the firefighters and policemen we now herald as fallen heroes. I walked into the dining room to use the phone. I called into work to make sure that the watch was paying attention to the message traffic. Working in military communications, I knew that they would have a lot of correspondence to deal with. My overly excitable Chief answered the phone. I asked to talk to someone on watch, since I was obviously not going to get any answers out of him; he sounded as if he were going to have a heart attack.
“Well, we’re pretty busy, is this really important?” he asked.
“I just wanna make sure that the watch is keeping a close eye,” I responded.
“Oh, yea, we’re really jumpin’ here, everybody’s really busy, we got flash traffic coming all over the place!”
I found out later that he was the only one running around.
“Ok, thanks Chief.”
I got off the phone and returned to the living room. My wife was in tears, horrified and amazed that such a thing could happen. I sat back down on the couch and began to contemplate my own feelings. At first it seemed that I felt nothing. I remember wondering if I were so desensitized to violence that I was incapable of feeling anything. As I watched the firemen, policemen, and volunteers sift through the rubble, a desire came over me, a desire to help. I wanted to do something! I wanted to be in New York or Washington D.C. I wanted to help, to put hand to brick and begin the immediate repair of what the evil men had destroyed. I wanted to be surrounded by the smoke, ash, and debris. I wanted to put goggles over my eyes and a handkerchief over my mouth and make my way through the rubble the fallen towers had left. I wanted to find survivors. I wanted to hear the cries of victims, to smell the burning of the buildings, to feel the dust collect on my clothes as I worked day and night to find survivors.
I could do none of those things. I was in my living room, watching television. We watched the news for the rest of the day, and when night came, I slept.