jb… a weblog by Jonathan Buys

The Coffee Cup

December 24, 2008

I’ve had this coffee cup on my desk at work for the past year or so now. It’s just a plain white cup, with the Ubuntu logo on it. I got it from CafePress. I loved it, for one, because the Ubuntu logo is great. Best Linux logo out there. I also loved it because as I was thinking about how to solve one problem or another, the cup was normally there with hot coffee waiting to be sipped as I pondered the solutions. Today I picked up the cup, walked towards the coffee pot, and dropped the cup. My wonderful Ubuntu coffee cup shattered as it hit the floor.

I loved that cup, so I didn’t want to break it. However, it seems appropriate, as today I also switched back to Windows at work. I’ve been running Ubuntu as my primary desktop at work for several months, and running XP in VirtualBox when needed. Lately, I’ve been needing the VM more and more, as I do more diagramming and planning in VMware Infrastructure Client and Visio, both Windows-centric applications. Also, rumor has it that in the next couple of months we will be replacing our aging Lotus Notes servers with Microsoft’s Exchange 2007. IBM released a Linux-native Notes client which supports Ubuntu, and it really works great. I was hoping to use the Evolution client that comes with Ubuntu when we made the switch to Exchange. Unfortunately, Microsoft changed the MAPI standard for communicating with the server in Exchange 2007, and there is no supported Linux client. Which left me with two choices: run Outlook in my VM, or move everything back to Windows and conform to company standards. I debated this in my head for a couple of weeks, but in the past three days I’ve had X crash on me three times in Ubuntu. When X crashes, it takes all of my X applications with it, along with the data… it’s like Windows 95 all over again.

X crashing for no apparent reason was the nail in the coffin for me. I moved all my data over with a USB drive, and Monday I’ll format the Linux partition and fdisk /mbr from the XP recovery console.

I’ve really enjoyed using Linux, but honestly, it’s kind of a relief to be back in a supported environment again. There are still quite a few desktop tools missing from Ubuntu that are available on Macs and Windows. The one I miss most is Evernote, with the aforementioned Visio running a close second. Launchy is nice… not as nice as Quicksilver or Gnome-Do, but nice.

Mentioning Gnome-Do brings up another point. Gnome-Do has been acting up lately, catching on something or other and eating up 99% of the CPU. The developers are aware of the problem, and are working on a solution. However, using Gnome-Do as an example, the very idea of “Release Early, Release Often” goes completely against the grain of a business desktop. Any Linux desktop will contain beta-quality code, and when I’m relying on a computer to do my job, I can’t spend my time being a beta tester. Ubuntu is doing lots of cool stuff with 3D desktops and cutting-edge software, but I don’t need it to be cool, I need it to work. Reliably.

One last note about why I’m not using Ubuntu at work any more. My computer is a Dell laptop, mostly used in a docking station, attached to a 22 inch monitor. I noticed after a while that my laptop was getting really hot in the docking station, and I couldn’t tell if Ubuntu was reading the docking station correctly or if it was displaying on both the internal monitor and the external monitor. When I popped the lid on the laptop, the monitor either came on suddenly or was on the entire time, and the keyboard was hot to the touch. In the Gnome “Screen Resolution” preferences I found that I could turn the internal monitor off, and I think that solved the issue, but I’m not sure. I’d hate to think that I was actually causing the hardware harm by running Linux on it. I don’t want to spread FUD, but if it’s true, it’s true. When I’m running Windows, I don’t have that problem at all.

So, now I’m looking for a new coffee cup… something to inspire me, and be my companion in my little beige box. Whatever the new design is, it needs to be something that will last, something reliable, and something that’s in it for the long haul. Ubuntu has been good to me, both the OS, and the coffee cup, but in the end, they both broke, and I’ve got to move on.


How to Fix Linux

December 24, 2008

It’s been nine years since I first installed Linux on a computer of mine. It didn’t last long back then, since I actually wanted to use the computer for surfing the web, sending email, and playing games. Linux has come a long way since then, and now it’s a reliable desktop system at work. However, my system is reliable (and enjoyable) because I am a geek, and I know exactly what it needs to make it run smoothly.

For example, I use a dual-screen setup with xrandr, and it works well, especially with the new compositing ability in Gnome 2.22. Getting xrandr to work properly required me to edit a few lines in the xorg.conf file to allow for a large maximum desktop size. Compare this with how Windows deals with dual monitors: you plug it in, turn it on, enable it in the preferences, and there you go. Now compare that with how Macs handle dual monitors. You plug in another monitor and it works… bottom line.
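To give a concrete idea of what that edit looked like, here’s a sketch of the relevant xorg.conf fragment (the Virtual dimensions are placeholders; they just need to be at least as large as both monitors combined):

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Depth    24
        # The virtual desktop must be big enough to hold both monitors
        # side by side, e.g. a 1680x1050 external plus a 1280x800 panel:
        Virtual  2960 1050
    EndSubSection
EndSection
```

With that in place, xrandr can position the external monitor at runtime without restarting X.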

I realize that there are a lot of valid reasons for this. Proprietary hardware, lack of documentation, legacy code, etc, etc, etc…

I also realize that 99% of people who use a computer do not care.

Computers should be like appliances… plug them in, turn them on, and start using them. I like the toaster analogy. I don’t care how my toaster works. I just want it to make my toast in the morning. I don’t care if I have the “freedom” to take the toaster apart, study the timing mechanism, and re-create it in yet another toaster. I just want toast. I use Linux at work because it is an amazing server. I’ve seen Linux servers keep on trucking through environments that would kill another OS. I use Linux on my laptop at work because it integrates with my workflow and scripting perfectly. I don’t use it for the freedom, I use it because it works well for my environment. My environment, however, is very different from the average home user’s.

If Linux is ever going to take hold it needs to learn a few things from Apple. Here are a few points that will bring Linux onto Mom and Dad’s desk.

  1. Own the hardware. Apple controls all aspects of its hardware/software relationship. I’m not saying that a particular Linux distro should only be installable on one vendor’s hardware, but a hardware vendor should adopt Linux and fully develop the OS in house to support its hardware.

  2. Maniacally control the distro. The open source nature of Linux makes this very difficult, but it also makes Linux feel… disjointed. This goes hand in hand with point 1: develop the hardware and software in tandem to support each other. Release the code as GPL, sure… give it back to the community, but control what goes into the distro and develop it in-house. Linux has a bad habit of adding features because it can. Compiz has hundreds of options for ridiculous visual features that no one needs (and a few that everyone does need!), and which really distract from using the computer. Another example: do you think KDE 4.0 would have made it past the desk of Steve Jobs? Can you imagine what that conversation would have been like? Which leads me to my next point:

  3. Appoint someone Grand High Poomba. Someone needs to be there to say “No”. Someone who will say, yeah, that’s a great idea, but we are not going to do that in this distro. At times I think that Mark Shuttleworth is that guy… and then there are other times, when he decides to ship every six months whether Ubuntu is ready or not. Shuttleworth could be that guy.

  4. Provide the end to end user experience. When I pop open the lid of my laptop, I expect to be able to start using it within seconds. When I close the lid, I expect the laptop to go to sleep and wait for me to need it again. When the hardware is fused with the software, and features are controlled and perfected, the result is a very fluid, intuitive experience that brings the users back.

The last item I’d love to see from Linux is to stop adding new features. Seriously, stop adding new, untested, beta-quality code and spend some time perfecting what is already there. Fix what you have first. Then, and only then, start adding new features.

Linux, we’ve come a long ways, but we’ve still got a long ways to go.


AutoYast

December 24, 2008

I wrote this last year and never posted it. I’m glad I found it and can post it now.

One of the projects I’ve been working on in the past week has been a rapid deployment server for SLES 9. I would have liked to deploy SLES 10, but we are constrained by our application requirements. Novell has done a great job of making it easy to deploy SLES or SLED using the Autoinstall and Installation Server options available through YaST. Using Autoinstall, YaST steps you through the options required to generate an XML file; this file is read by YaST during system install and automates the process. To build a network installation source, the contents of the CDs or DVD need to be copied to the hard drive, preserving symbolic links. YaST’s Installation Server makes this easy, and also makes “slipstreaming” (to borrow a Windows term) a service pack into the install source automatic. I’ve built the network install source both ways, and I prefer using YaST to do it for me.

Even with all this being said, YaST (in SLES 9) is still missing some features that require me to edit the XML file directly. The most important missing feature, which they included in SLES 10, is the ability to create LVM volumes during partitioning. Not to say that it’s not possible, it just requires editing the XML source file. Using a little trial and error, I was able to partition the drive with a 200MB /boot (too big, I know), a 2GB swap, and then partition the rest of the drive as LVM, and then mount /, /var, /opt, /usr, /tmp, /home, and /work inside the LVM. Works like a charm. If you need a working autoinst.xml file, you can download mine here.
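For anyone who can’t grab the file, the partitioning portion of the profile looks roughly like this (a sketch from memory, not a drop-in profile; the device names, volume group name, and logical volume sizes are illustrative placeholders):

```xml
<partitioning config:type="list">
  <drive>
    <device>/dev/sda</device>
    <partitions config:type="list">
      <partition>
        <mount>/boot</mount>
        <size>200mb</size>
      </partition>
      <partition>
        <mount>swap</mount>
        <size>2gb</size>
      </partition>
      <partition>
        <!-- the rest of the disk becomes an LVM physical volume -->
        <lvm_group>system</lvm_group>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
  <drive>
    <device>/dev/system</device>
    <is_lvm_vg config:type="boolean">true</is_lvm_vg>
    <partitions config:type="list">
      <partition>
        <mount>/</mount>
        <lv_name>root_lv</lv_name>
        <size>4gb</size>
      </partition>
      <!-- ...similar <partition> entries for /var, /opt, /usr,
           /tmp, /home, and /work... -->
    </partitions>
  </drive>
</partitioning>
```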

This setup is great, but it required me to boot off of the CD and then enter a long install=nfs://bla bla bla/bla bla autoyast=nfs://blalbalba line at boot time. To really make the system work, I needed network booting for full automation. I found a great walkthrough in this PDF, which, surprisingly enough, worked for me the first time. I had to install the tftp, syslinux, and dhcp-server RPMs, then edit a couple of files, move a couple of things… really no big deal.
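The PXE side boils down to a pxelinux configuration along these lines (a sketch; the NFS server name and paths are placeholders standing in for the real ones above):

```
# /tftpboot/pxelinux.cfg/default -- illustrative sketch, paths are placeholders
default sles9
prompt 0

label sles9
  kernel linux
  append initrd=initrd install=nfs://installserver/install/sles9 \
         autoyast=nfs://installserver/install/autoinst.xml
```

The dhcp-server config then points netbooting clients at the TFTP server via next-server, with a filename of pxelinux.0.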

Now, I’m ready. When we get 100+ servers in, which I’m told I’ll have 7 days to install, I’ll be able to say “what would you like me to do with the rest of the time?”


Build Something Better

December 19, 2008

What would it take to change computers? What would it take to build something truly revolutionary in a time where most of the design philosophy of a computer is taken for granted?

I spend an inordinate amount of time thinking about ways to make computers better. Much of that thought is dedicated to software, but an equal amount of daydreaming is allocated to hardware. In my mind, the two go together like peas and carrots, yin and yang. A couple of years ago I wrote a college paper that talked about the UMPC market, which has now evolved into netbooks. I couldn’t stand the UMPC interface or hardware, but something about the idea of a very small, very portable computer appealed to me. Massive amounts of information, instantly available, wherever you are. I took the ideas of tablet PCs and UMPCs and designed something I called the FarmDog.

FarmDog was a tablet PC with an OS that ran off of a removable flash drive. The OS ran off of that drive, but all applications and user data resided on a hard drive, which was also removable. The idea was that you could keep your OS and your data completely separate, and also make it very, very easy to back up your system. Upgrading to a new operating system would involve buying a new chip from the store, shutting down the PC, putting in the new chip, and restarting the PC. You could switch between OS’s whenever you like, so if one is giving you problems you could go back to a previous revision.

A second part of the FarmDog was the dock. Normally, when you were running around with your PC, it acted as a tablet. However, when you put FarmDog in the dock (vertically; I didn’t imagine it being docked as a widescreen), you had the setup of a regular desktop computer. Keyboard, mouse, etc… the main difference with the FarmDog dock was the automatic drive duplication. Now, we would probably want to implement something like Apple’s Time Machine, but at the time, I was thinking of duplicating the entire data drive to an external disk every time the PC was docked. This would keep a good backup of your applications and data, just in case.

I believe that the secret to a great consumer computer is a tight bond between software and hardware, coupled with great design in both. Apple has this just about nailed with their new MacBooks, but I’m still left wondering: how could we build something better, something different? I think FarmDog could be the start of something, I just wish I had the funds to build it.


Account Pruning

December 19, 2008

I’m a geek. Understanding that little fact puts me a little closer to being in touch with myself, and understanding that I’ve got a habit of trying out every new service or technology that comes along. That’s fun, but in the case of online services, I wind up with accounts all over the place. So, the past few days I’ve been pruning my online accounts down to what I really need.

email

I started out with AOL in the ’90s, and quickly learned that AOL masked the rest of the Internet, and that all I needed was a connection and a real browser. From there, I went to Hotmail, and stuck with Hotmail for several years, up until the time I bought my first Mac. Since getting an iBook in ’03 I’ve had a .Mac account, but it hasn’t always been the best email service. I also tried Yahoo Mail, but I’ve never liked any of the Yahoo services… too gaudy for my taste. Then came Gmail, which is an excellent service, and one that I’ve stuck with for quite a while now. However, there’s also the new MobileMe, which promises to synchronize everything everywhere, and since MobileMe offers a ton of other features and integrates seamlessly into my MacBook, I’m sold on it. I’ve actually gone back and forth between MobileMe and Gmail quite a bit. Since Gmail can download POP3 mail and send mail as my MobileMe account, it makes it easy to switch over. However, I’m not a big fan of the Gmail design, or any of the skins that I’ve seen, and I really like to use Mail.app for my email. I could use POP with Gmail, which works great, or I could use IMAP, which doesn’t work so great with Gmail but works perfectly with MobileMe. I also really love the visual design of MobileMe. I think it’s uncluttered and smooth. No ads (since it’s a paid service).

So, anyway… I wind up with five or six email accounts spread out across the interwebs. I’ve been closing them one at a time, and it’s not always easy. A lot of companies, like Microsoft and Yahoo, will not just close your account straight off. You have to request that it be closed and then not attempt to log back in under that user name for three or four months.

social

I’ve had accounts on MySpace, Facebook, LinkedIn, Friendster, Orkut, Del.icio.us, Flickr, Twitter, Pownce… and probably a few more that I cannot think of right now. I thought all of them were fun, but of limited usefulness. Over the past couple of weeks I’ve pruned them down to Twitter and Flickr, and I think I’ll keep it at that. Twitter is fun, and it’s got some additional services that I use every day. I follow CNN Breaking News, and get a text message as soon as some important news story develops. It’s not too frequent, only really big stuff. Some of the people I follow also point out some really cool stuff on Twitter that they don’t mention anywhere else. Flickr is a great photography site, and I’ve had a pro account in the past, but not now. One reason I’m really not happy with Flickr is that I have to have a Yahoo account to use it. I’ve already said that I don’t like Yahoo that much, and would much rather close the account altogether. Also, MobileMe has a photo sharing service that integrates right into iPhoto; although that service is not nearly as “social” as Flickr, it’s still a way to get my pictures out there. If I can find a way to integrate the MobileMe galleries into Wordpress we’ll be in business. I consider the relationship between Flickr and myself “Under Review”. Del.icio.us: same story… great service… bought by Yahoo… I don’t want a Yahoo account.

storage

When I was looking at a replacement for MobileMe, I needed a replacement for iDisk. The best I could find was Box.net, but I could never get the webdav mount working properly in Linux. iDisk works great, pretty much every time, I rarely have a problem with it. Also, I can sync my iDisk locally to my Mac, which means I still get off line access. If, that is, I were ever off line. I also looked at Microsoft’s Live services, but I don’t think I ever actually used the online storage for anything.

elsewheres

I wind up trying out all kinds of beta services and startup companies’ web sites. I’ve loved all the new Web 2.0 design that’s been so big lately. To be honest, I can’t even begin to count all the services I’ve signed up for. I’m hoping that eventually the accounts that I’ve signed up for but no longer use will expire. I think what I really need to do is come up with a fake online identity and use that to sign up for all the accounts that I’m just trying, versus the accounts that I actually use on a daily basis. On the other hand, why should I even bother with shutting them down? Well, to answer that, I’ve got to go back to my opening sentence: I’m a geek. Being a geek also means that I really like to keep things organized, and if they’re not, it drives me nuts. I need to prune, or I can’t concentrate; it’s an open loop, and open loops need to be closed.


Nagios Check Scheduling

November 5, 2008

Or, maybe a better title for this would be “They rebooted the server, why didn’t I get a page?” I’ve had that question asked of me a few times, and I’ve never had a good answer, so I thought I’d take a closer look at Nagios and see what is going on.

Inside of nagios.cfg are several values that are important to consider. The first is the Service Inter-Check Delay Method. This is the method that Nagios should use when initially spreading out service checks as it starts monitoring. The default is to use smart delay calculation, which will try to space all service checks out evenly to minimize CPU load. Using the dumb setting will cause all checks to be scheduled at the same time (with no delay between them)! This is not a good thing for production, but is useful when testing the parallelization functionality.

  • n = None - don’t use any delay between checks
  • d = Use a “dumb” delay of 1 second between checks
  • s = Use “smart” inter-check delay calculation
  • x.xx = Use an inter-check delay of x.xx seconds

The next setting to look at is the Service Check Interleave Factor.

This variable determines how service checks are interleaved. Interleaving the service checks allows for a more even distribution of service checks and reduced load on remote hosts. Setting this value to 1 is equivalent to how versions of Nagios previous to 0.0.5 did service checks. Set this value to s (smart) for automatic calculation of the interleave factor unless you have a specific reason to change it.

  • s = Use “smart” interleave factor calculation
  • x = Use an interleave factor of x, where x is a number greater than or equal to 1.

I love it when there is good documentation in the config files. So, there are several checks running at once, and they are spaced out how the Nagios application thinks is best, but how many are running at once? This is determined by the next variable, Maximum Concurrent Service Checks.

This option allows you to specify the maximum number of service checks that can be run in parallel at any given time. Specifying a value of 1 for this variable essentially prevents any service checks from being parallelized. A value of 0 will not restrict the number of concurrent checks that are being executed.

Our variable here is set to 0, unrestricted.

The next item that caught my eye is the Service Reaper Frequency variable.

This is the frequency (in seconds!) that Nagios will process the results of services that have been checked.

Our variable here is set to 10, so every 10 seconds Nagios processes the results of the checks.

The last value is actually a group of values collectively known as Timeout Values.

These options control how much time Nagios will allow various types of commands to execute before killing them off. Options are available for controlling maximum time allotted for service checks, host checks, event handlers, notifications, the ocsp command, and performance data commands. All values are in seconds.

Our values are:

  • service_check_timeout=60
  • host_check_timeout=30
  • event_handler_timeout=30
  • notification_timeout=30
  • ocsp_timeout=5
  • perfdata_timeout=5
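Pulled together, the scheduling settings discussed above look something like this in the main config file (assuming the “smart” values for the first two, as described):

```
# Service check scheduling (nagios.cfg)
service_inter_check_delay_method=s   # "smart" spread of initial checks
service_interleave_factor=s          # "smart" interleave calculation
max_concurrent_checks=0              # 0 = no limit on parallel checks
service_reaper_frequency=10          # process check results every 10 seconds

# Timeout values, in seconds
service_check_timeout=60
host_check_timeout=30
event_handler_timeout=30
notification_timeout=30
ocsp_timeout=5
perfdata_timeout=5
```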

Knowing the theory is good, but it is also good to know the exact times between checks. In the Nagios web interface there is a page for each service that is monitored with the label “Service State Information”. On this page I found the timestamp for the “Last Check Time” and the “Next Scheduled Check”. Looking at several of these I found that each service check is five minutes apart… down to the second.

One last item to consider is that Nagios gives each check three chances to correct itself. This means that if Nagios finds an error, it immediately schedules the next check of the service. (Ping also being considered a “service”)

So, all of this amounts to a very long-winded explanation of what I thought was happening: the server was rebooted right after a service check, and it came back up before the next service check was executed.
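The timing can be sketched in a few lines of Python (the five-minute interval comes from the “Service State Information” page mentioned above; the reboot start times and duration are made up for illustration):

```python
# Sketch: why a fast reboot between two Nagios checks goes unnoticed.
# Assumes a fixed 300-second check interval, as observed in the
# "Service State Information" page.

CHECK_INTERVAL = 300  # seconds between scheduled service checks

def reboot_detected(reboot_start, reboot_duration, check_times):
    """Return True if any scheduled check falls inside the reboot window."""
    reboot_end = reboot_start + reboot_duration
    return any(reboot_start <= t < reboot_end for t in check_times)

# Checks scheduled at t = 0, 300, 600, ...
checks = [i * CHECK_INTERVAL for i in range(10)]

# A 2-minute reboot that starts just after a check finishes is invisible:
print(reboot_detected(10, 120, checks))   # False -- no page goes out
# The same reboot straddling a scheduled check does trigger an alert:
print(reboot_detected(250, 120, checks))  # True
```

So a server that reboots in under five minutes, starting right after a check, simply never looks down to Nagios.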


Essentials

November 4, 2008

Inspired by Mark Pilgrim’s Essentials post, I thought I’d come up with my own list of essential software.

  1. Mac OS X. Closed source or open source, I lean towards the system that performs the best. Simplicity, security, and reliability have made me a Mac fan for years. I’ve tried to love Linux, I really have, but the love is just never returned.

  2. Safari: I don’t think Safari is really “The world’s best browser”, but I do think it’s the best browser on OS X.

  3. Adium: Adium beats out iChat in just about everything. I couldn’t care less about video chat, and I type just fine. If I want to talk to someone, I’ll call them. Also, since $WORK uses Lotus Notes and Sametime, Adium lets me get on the chat network from home when I’m on the VPN.

  4. Shimo: Speaking of the VPN, Shimo is amazing. A major leap forward from the ridiculous Cisco VPN client on the Mac. It’s awesome, and I can’t live without it.

  5. iTunes: iTunes is not the most lightweight media player on the market, but it’s certainly the best at what it does on the Mac. It’s a mix of music, movies, tv shows, and even applications for the iPhone or iPod touch. And, if you are smart and shop at Amazon MP3, you get cheaper music, better quality, and none of the DRM that you get from the ITMS.

  6. iCal + Google Calendar: I don’t really know why I haven’t moved my calendar over to gCal completely. I really do like the UI of iCal, so maybe that’s it. Or maybe it’s the desktop integration. Whatever it is, I’ve moved past Mail.app in favor of Gmail, but I’m still using iCal for my calendars.

  7. iPhoto: There’s really no better alternative available to iPhoto on the Mac. It’s either that, or using the Finder to manage my photos, and since I’ve got several years invested in iPhoto, I really don’t feel like switching to anything else.

  8. Preview: Preview is far faster than Adobe Reader for viewing PDFs, but more than that, it works great as a simple image editor as well. You can edit icons, crop and resize screenshots for the web, and annotate PDFs. Preview is one of the apps that I miss when I’m away from my Mac.

  9. MarsEdit: I despise typing in any online, web-based text editor, so MarsEdit is a lifesaver. Always under active development, MarsEdit has great support for Wordpress, which is all I really need. Another must-have.

  10. OmniGraffle: Every now and again I need to make a graph, or a chart, or a mind map, and when I do, OmniGraffle has me covered. It’s sometimes touted as Visio for the Mac, but I think OmniGraffle is in a class all its own.

  11. TextMate: Because I can’t get my .exrc file from vi working exactly the way I want it just yet. For writing, it’s either TextMate or vi, and really, it’s a bit of a toss-up. I use vi for just about everything at $WORK, and I may begin using it at home, but till then, TextMate is the next best thing.

  12. Yojimbo: I collect random bits of information from all kinds of places… pictures, web pages, passwords, serial numbers, bookmarks… everything gets dumped into Yojimbo.

  13. Time Machine: I loves me some backups, and it’s good to know that Apple has me covered with Time Machine. All I have to do is remember to plug in my MyBook and I’m backed up. That, and the GUI to restore files is way more fun than TSM.

  14. Xcode: I’ve dabbled in development, and I’m planning on returning soon. If you want to develop on the Mac, you have to use Xcode.

  15. Spotlight: Spotlight is an amazing technology, far more advanced than other desktop search applications. Spotlight does not rely on periodic indexing of the hard drive. Instead, Spotlight indexes the system once, and from then on, every time a file is changed, the change is written to both the file and to the Spotlight index, keeping the index up to date, instantly, all the time. Introduced in Tiger, Spotlight has really matured in Leopard. It’s so fast that I now use it for everything that I used to use Quicksilver for.


End of an Era

October 27, 2008

The hard thing about keeping a job in the technology field is that the field is constantly changing. Just this past summer $WORK fired several mainframe workers who could not keep up. They got stuck on one technology that they knew how to operate, and failed to evolve when the field did. Now I think it’s clear that another sector of the job market is on its way out, the one that I, and thousands of others, occupy: the job title of systems administrator.

There are three technologies that I believe will bring a major change to the sysadmin field.

  1. Virtual Appliances. Small, single-purpose virtual machines that provide a simple web interface to configure the hostname and IP address are called virtual appliances. They are normally bundled to provide a specific service, like running a Wordpress install, which requires a web server, PHP, and a database. Just a few years ago, building a LAMP stack to run this took at least a passing knowledge of the technology, but with a virtual appliance, no knowledge of the underlying applications or operating system is needed. The developers simply build the operating system the way they like and hide the complexity behind some clever web interface. I do not believe it will be long before the application vendors themselves are selling their applications as virtual appliances. IBM could sell DB2 and WebSphere appliances, Oracle could sell their own appliance for their database, maybe a SAP virtual appliance. The idea is that the operating system will no longer be of consequence, thanks to the availability of Linux and other open source tools. If the licensing issues with Linux become a problem, there’s always BSD. With Darwin powering everything from handheld phones (the iPhone) to powerful Xserve clusters, Darwin may well be the base of choice for building commercial virtual appliances. Of course, if neither of those options work out, I’m sure Microsoft would be more than happy to license their OS… for a nominal fee, of course. The point is, the OS will not matter any more.

  2. Amazon EC2. Cloud computing is a big buzzword these days, but I think the smart thing to do is to stay away from the buzzword and take a look at the technology behind it. As it turns out, the virtual appliances I mention above are only half the story. The other half is, and always will be, hardware. If you need hardware, you need someone to take care of the hardware, and therefore, you need a sysadmin. However, with Amazon’s Elastic Compute Cloud (EC2), a small to medium sized business no longer needs to invest thousands in infrastructure to build a world class data center. They simply use Amazon’s data center, and pay for what they use. Amazon has put a lot of thought, time, and money into their data center, more than any but the largest of businesses. What they are building is disruptive technology, and the traditional datacenter is being displaced. I wonder how long it will be before someone starts offering small to medium sized businesses the option to run their entire data center on Amazon EC2, with virtual appliances running in the background supplying the desired services. I also wonder how long after that business starts that it will explode.

  3. Google App Engine. The App Engine is a little different from the combination of EC2 and virtual appliances, but still just as disruptive. Imagine a business offering a particular service over the Internet, they want to be able to scale when needed, and they want very little cost at the beginning. If they write their application in python, they have it made. The App Engine will host the application on Google’s infrastructure, giving it the speed, redundancy, and ability to scale that it needs to compete. With the App Engine, the college student in his dorm has just as much ability to build the next great thing as the multi-billion dollar business investing hundreds of thousands into its own infrastructure. Any web accessible service could be written in the App Engine, and be instantly available to scale as needed. This means that the company only needs programmers… no hardware, no operating systems, no fault tolerant highly available network infrastructure, not even a few web pages to set up IP addresses. No need for a sysadmin at all.

Many of the issues that companies have with the technologies mentioned above have to do with trust. I think that over the course of the next five to ten years, the technology will mature, and the trust will be earned. For a business, hooking up their infrastructure will be as easy as filling out a form online saying what they want. It will become as second nature as hooking up phones and Internet access. I think everything could come as a virtual appliance. From common needs like email and file sharing to more complex needs like Content Management On-Demand, everything could be run from a virtual appliance. The vendor installs the OS, makes sure it is configured just right, installs the application, does the same for it, packages it up, and makes millions.

This makes the career field of systems administrator much smaller, but it also opens up new opportunities in designing these appliances and being a part of the transformation. I see a new field emerging for virtual appliance designers and integrators, a field that could operate more like independent software vendors do now. I’m considering it myself. Of course, I still have a book to write, and possible grad school to go to, and a family to raise, but hey… life is short!


Poor Web Apps

October 16, 2008

I just spent the past 10 minutes trying to get this article I wrote for BrightHub posted using their online writers application. I’m not sure what language they are using, or what platform the site is running on, but I tried Chrome, Firefox, and finally, out of desperation, Internet Explorer 7, all with the same results. Almost every time I would try to preview the article before submitting it, the application would wipe out everything I just typed. Everything.

That means that I’d have to copy and paste the article into the app again, and screw around with formatting and making sure the links were there. When the app did not wipe everything out when I previewed it, it would wipe it out when I tried to add meta information or assign it to a category.

This was driving me nuts. Thankfully, I had pasted the article into Gmail beforehand to spell check it properly. So Gmail automatically saved the article as a draft, and it was very easy to copy it back out of there and into BrightHub.

I know that they are probably making improvements all the time to the system, but I certainly hope that I don’t run into this particular bug again. Good grief… move to wordpress.


New Apple Hardware

October 14, 2008

There’s a lot to think about in the new MacBook line. The aluminum MacBooks are unquestionably what I wanted in the white MacBook that I have now: sturdy construction, fast graphics, and a sleek PowerBook look to them. Apple has further blurred the line between their professional and consumer lines, something they started with the aluminum iMac, continued with the MacBook Air, and now have completed with the MacBook.


Wow, nothing makes me want to max out my credit card like a Stevenote with some really great stuff.

The one thing I question about the new MacBook line is the glass trackpad. Apple has finally gotten rid of their one-button mouse… now there is no mouse button at all! There is no button to press for either right or left click. Instead, the entire trackpad is a button, and you can use multi-touch gestures and press down on the trackpad to click. I believe that will take some getting used to. I told my wife about this; she rolled her eyes and walked off. She, my ideal “everyuser”, decided to get a PC last month, and the lack of two buttons on the current MacBooks was one of the reasons.

It is interesting to note that Apple, after pioneering the mouse and the GUI back in the ’80s, is now pioneering the move away from the mouse by integrating multi-touch devices into their product line. I wonder how long it will be till they release a multi-touch USB trackpad for their desktop line.

Well, I could blather on about this all night, but I’ve got work to do…