jb… a weblog by Jonathan Buys

Cocoa

January 24, 2009

Last summer I decided to dive into programming and give it everything I’ve got. For months, I would get up at 5 and pore through the Hillegass book, every page, every challenge. I’d revisit it at night, after the kids were in bed, and code till I couldn’t stay awake any more. I finished the book, and started coding my first real application. It was then that I found out how very little reading one book gives you. Hillegass says he gives you 80%; I’m thinking it’s more like 50%, tops. I reached out to the Cocoa Dev mailing list for help, and eventually even went to a CocoaHeads meeting with the local chapter. Meeting some “real” developers, I despaired. There was far, far too much that I didn’t know; I felt it would take me an entire lifetime to learn what I needed. I gave up, wrote Goodbye Cocoa, and focused on something else.

Months passed, and I began to rethink my initial approach to development. It became clear to me that I was trying to accomplish too much, and that I needed to start with something very small. At work, I had written a small shell script that I use to ssh to our servers. The script was very small, and very, very handy. I called it “go”. To use it, I just type go servername and I’m there. Turning that script into a GUI Cocoa app seemed like great practice, a great learning exercise. So, I downloaded Xcode again, and fired it up.

This is the Cocoa version of Go. There are sheets for default preferences and individual host details. The idea is to stay as close to the keyboard as possible, so all of the buttons are bound to the standard key combos: CMD-N creates a new host, CMD-, shows the preferences sheet, CMD-i shows the info sheet, Delete removes a host, and Return launches Terminal with an ssh session to the currently selected host. That’s Go in a nutshell. You can add hosts, and get easy ssh access to them. The green light at the left of each host will (eventually) show whether the host answers a ping. I have the ping code down, and I have the code that shows the light down; I just need to tie them together with an if/then check, and run that check inside a timer so the lights update every so often.
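The shape of that check-plus-timer is something like the sketch below. None of this is real Go code yet: the Host class, the hosts array, the pingHost: and setStatusLight: methods, the image variables, and the 30-second interval are all placeholders for whatever the finished code ends up being.

// In the window controller: start the repeating status check once the nib loads.
- (void)awakeFromNib
{
    statusTimer = [NSTimer scheduledTimerWithTimeInterval:30.0
                                                   target:self
                                                 selector:@selector(updateHostStatus:)
                                                 userInfo:nil
                                                  repeats:YES];
}

// Called by the timer: ping each host and flip its light green or red.
- (void)updateHostStatus:(NSTimer *)timer
{
    for (Host *host in hosts) {
        if ([self pingHost:host]) {
            [host setStatusLight:greenImage];   // host answered the ping
        } else {
            [host setStatusLight:redImage];     // no reply, mark it red
        }
    }
}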

There is a lot of spit and polish left to add to Go: a decent “About Go” panel, some nice looking help, and a decent status bar menu for it to live in, for starters. I’d also love to add some of the really nice Core Animation stuff to the main Go window… just because it’s there. Getting Go to where it is now has been a lot of fun, and someday it may even be ready for public consumption. One of the first things I did with Go was to put it into an online Subversion repository, which I host here. Another thing I did very early on was to add the Sparkle framework for auto-updating, since I’ve got a friend in Maine who tests Go for me on a fairly regular basis. After adding Sparkle, I followed these instructions to create an automated workflow that compiles Go, creates a zip file of it, digitally signs the zip archive, uploads the archive to my server, and updates the appcast XML file. An awesome amount of automation.
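The workflow boils down to a handful of shell commands. Roughly, and with the key location and server name made up for illustration, it looks like this; the base64 output of the signing step is what gets pasted into the appcast’s sparkle:dsaSignature attribute:

#!/bin/sh
# Build the Release configuration of Go.
xcodebuild -configuration Release

# Zip up the app bundle for distribution.
cd build/Release
zip -r Go.zip Go.app

# Sign the archive with the Sparkle DSA private key (key path is illustrative).
openssl dgst -sha1 -binary < Go.zip \
    | openssl dgst -dss1 -sign ~/keys/dsa_priv.pem \
    | openssl enc -base64

# Upload the archive and the updated appcast to the web server (made-up host).
scp Go.zip appcast.xml jonathan@example.com:/var/www/go/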

Adding the ability to ping was a challenge. A few forum posts I found claimed there was no way to do it, since pinging means opening a raw socket, which requires root privileges. That was obviously not going to fly, but I couldn’t believe there wasn’t already an existing Cocoa class for it. After much searching, I found SimplePing, buried in the Apple Developer site. I downloaded it, imported the class into Go, and gave it a shot. Calling SimplePing seemed to work! It would ping the host when I called it. Unfortunately, even when it tried to ping and could not reach the host, it still returned a return code of 0. That wasn’t going to do. So, looking through the C code of SimplePing, I found this:

if (error == 0) // was able to wait for reply successfully (whether we got a reply or not)
{
    if (gotResponse == 1)
    {
        numberPacketsReceived = numberPacketsReceived + 1;
    }
}

which I changed to this:

if (error == 0) // was able to wait for reply successfully (whether we got a reply or not)
{
    if (gotResponse == 1)
    {
        numberPacketsReceived = numberPacketsReceived + 1;
    }
    else
    {
        error = 1;
    }
}

and all was right with the world again.

Adding the ability to launch Terminal and ssh to the selected host was also an interesting challenge. Apple provides a framework, ScriptingBridge.framework, that lets one application drive another through AppleScript. For Go to call Terminal, it needed to know what AppleScript commands Terminal would accept. So, I ran this command:

sdef /Applications/Utilities/Terminal.app | sdp -fh --basename Terminal --bundleid com.apple.Terminal

This created a header file, “Terminal.h”, which I was able to import into Go. From there, I created a “ScriptController” class, where I have this piece of code:

+ (void)runTerminal:(id)sender withUserName:(NSString *)userName andHostName:(NSString *)hostName
{
    // Create our Terminal object via the ScriptingBridge
    TerminalApplication *terminal = [SBApplication applicationWithBundleIdentifier:@"com.apple.Terminal"];

    // Build the ssh command from the user name and host name passed in
    NSString *script = [NSString stringWithFormat:@"ssh %@@%@", userName, hostName];

    // Run the command in a new Terminal window and bring Terminal to the front
    [terminal doScript:script in:nil];
    [terminal activate];
}

This allows you to pass a user name and host name to the class, which builds an ssh command and passes it to Terminal via AppleScript.
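Calling it from the controller that handles the Return key is then a one-liner; the user and host names here are made up:

// Launch an ssh session for the currently selected host (names are illustrative).
[ScriptController runTerminal:self withUserName:@"jonathan" andHostName:@"webserver1"];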

Since it was fairly easy to add this ability, I’ve been playing with the idea of also adding VNC capability to Go. Leopard includes a Screen Sharing application, which is based on VNC, so I’m hoping to be able to script it in a similar way. I also looked into Microsoft’s Remote Desktop Client and CoRD, but neither supports AppleScript, so it would be very difficult to tie them into Go.

Go is a “document-based” application. That means you can create different “sets” of hosts, and save those sets as individual files. So, if you are on a team, you can share your set with the rest of your team.

The concept of Go is pretty simple: you have a set of places, and you want quick access to those places. I’m toying with the idea of adding a lot more capability to Go, like storing bookmarks, Finder folders, ftp, sftp, etc… It will be interesting to see how it goes. Right now, I’m just trying to get the basics down. Programming is perhaps the most challenging thing I’ve ever done, but I’m glad I didn’t give up for good.


The Sorry State of Enterprise Software

January 22, 2009

I’ve been unlucky enough to work with quite a few pieces of so-called “enterprise” software; the worst of the lot lately is the Tivoli Workload Scheduler. TWS is, at its core, a glorified cron. It is a scheduler: you create jobs, or scripts, and have them executed at given times. You are supposed to be able to cascade jobs, and create dependencies between them. This is all well and good, but there are some serious problems with this software.

The first problem is the price. List price for TWS is $33 per value unit. IBM bases its pricing on how many CPU cores are in the server you install the software on: 100 value units per single-core CPU, and 50 value units per core for dual- or quad-core CPUs. So, if you have four servers, each with a quad-core CPU, that’s 16 cores at 50 value units each, or 800 value units, which comes out to around $26,400. I think we just went ahead and bought 1000 value units up front. That’s a fairly good-sized amount, and it does not include the cost of the consultant it’s going to take to install, configure, and actually use the software.

Why tie the cost of the software to the number of cores in the system? TWS doesn’t use CPU resources to do any actual work; it passes the work off to other applications and simply schedules them to run. The price would almost be bearable if the software actually worked. For $26,000 I’d think it ought to make me coffee and pancakes in the morning. The reality is that after several months of enduring the software, it still doesn’t work properly.

The end user of the system has been trying to add event rules that fire off an email if a job doesn’t end correctly. Wow, that’s like, what… one line of shell script? But, since this is TWS, we have to put in a call to IBM. IBM will call back and ask for a ton of information. They’ll ask for directories that don’t exist, ask you to run commands that may or may not work, and generally take up a lot of time. Meanwhile, I’m starting to think that we are actually beta testing this software for IBM, and they just didn’t bother to tell us.
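For comparison, here is roughly what that one line of shell looks like; the job script, log file, and address are made up for illustration:

/opt/jobs/nightly_load.sh || mail -s "Job nightly_load failed on $(hostname)" ops@example.com < /var/log/nightly_load.log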

And then there’s the user interface. The UI, like many IBM applications, is quite obviously built on Java, evidenced by the length of time it takes to launch. Once it is launched, there are cascading left-to-right areas of a single window that allow you to perform separate tasks. At $work, I’ve got a 22” monitor, and this is the only application that I expand to full screen. It needs it. The application, called the “Job Scheduling Console”, provides its own tabbed MDI interface. It is extremely confusing. Part of the confusion is that the developers evidently decided there were too many options in the main application window, and chose to add a second interface to TWS through its integrated WebSphere application server. The second interface, also Java, is accessed through a web browser. Unfortunately, not just any web browser: it seems to only support Internet Explorer. I tried to access it first through Chrome, which did not work at all, and then through Firefox, which almost worked, but pieces of the application were missing. IE worked well. The web interface is just as jumbled as the fat client on the desktop: buttons seemingly randomly placed, some options hidden in drop-down menus, and others placed either above or below the data.

There is no clear, obvious method to accomplish anything with this user interface.

And that is not all, my friends, oh no, that is not all. You must also have access to the command line on the server where TWS is installed. Even on the command line, TWS is not a good citizen. There is no man page or online help shipped with the application; you have to load a ton of special environment variables, and they provide scripts that launch a faux-shell that only accepts certain commands. One such command, conman, offers the ability to view the logs in real time (why, for the love of God, do you not log everything to syslog?), but only if you enter the command “con se” at the conman prompt. Also, you should enter “lev=4” to make sure you get all the logs. Proper logging in an application can be a lifesaver, and it could have been an area where TWS redeemed itself somewhat. That is not what has happened. The “con se” command only works sometimes. Other times it simply says that it submitted the command to be processed and returns you to your prompt. Great, thanks… so where are my logs?

Having multiple interfaces to the application would be fine if you could accomplish everything needed in any one of them. However, that is not the case. You need all three: the end user must switch between the web interface and the fat client, and I, as the administrator, must switch between the web client, the fat client, and the command line to try to coax this monster into doing what it is supposed to do. Which is… schedule jobs. That’s really all this is supposed to do, schedule jobs to run. I don’t think it should be this hard.

Take these points into consideration in light of the cost of the application. Now, let your jaw slowly close, and realize that IBM can charge this much because it has found a market that no one else is tapping. TWS is only one example of horrible “enterprise” software; there’s a lot more of it out there. Personally, I see an opportunity here: an opportunity for well-thought-out, beautifully crafted software that works well, is easy to use, and gets the job done.


Visual Thinking

January 12, 2009

The ability to visualize a complex system is key to a real understanding of it. To me, this applies to computers and technology. To Daniel Tammet, this applies to numbers and letters, on a far, far more vast scale. Daniel is a savant, an extraordinary person who pushes the boundaries of what we believe we know about human capability and learning. What he has to say rings a bell, because how he views his numbers and letters is similar to how I understand the inner workings of a computer. I can visualize what is going on, or what I want to happen. I can do this because I’ve worked and studied the computer field for years, and because I enjoy the work. What I’ve found is that there is a distinct difference between understanding the technology and memorizing what happens when you click a button.

If you work in the tech industry, memorizing technology is a bad idea. Technology changes; it evolves and grows. It is better to understand the attitudes and purpose of the technology. Networking is a great example. I started out learning about wave propagation theory in the Navy, and about how we could get data from the size, shape, or velocity of a wave. Later, when I went on to learn TCP/IP networking, I found that the data was still transmitted the same way, as 0s and 1s, but how it was decoded was different. Then the stack, then the applications, then scripting, and now I’m learning a high-level programming language, and it’s finally starting to click.

The thing is, I couldn’t have learned these skills if I didn’t have a visual image in my mind of how things work beneath the gloss of the computer screen. I can’t imagine trying to use a computer, much less program one, without at least a passing knowledge of what happens when you click that mouse.

Then again, to 99.999% of people, it doesn’t matter, and really shouldn’t. Computers should be so easy to use that you don’t have to learn a new skill to use one effectively. They should be as self-explanatory as toasters. Unfortunately, they are not. Windows 7 is coming out sometime this year, and it will sport a new user interface that its users will have to learn all over again. Many, many of them will simply try to re-memorize what button does what, and what order to click things in. You shouldn’t have to learn why the computer works the way it does, but it certainly doesn’t hurt. In the end, it makes things much easier too.


New Years Day

January 1, 2009

I love January 1st. It’s not that this day is really any different from any other day, other than having the day off. It’s the mental association that goes along with the start of a new year. The calendar restarts, and it seems like a fresh, clean slate to build the rest of the year upon. I like resolutions, and I kept mine last year, so I’m going to try to keep another one this year. Last year I resolved to learn how to swim laps, and I did. I still swim fairly regularly, and am actually looking forward to increasing how often I go swimming.

When I return to work tomorrow, there will be no acknowledgment of the new year’s significance, but it will be in my mind. A fresh start, a clean slate, and another chance to build my life to be whatever I dream it can be.


Songbird Media Player

December 29, 2008

Songbird is a very young product with a very bright future. The Mozilla-based media player has come a long way since its first beta release; unfortunately, to unseat the ruling titan iTunes, Songbird still has a very long way to go. Songbird is open source, packed with features, and seemingly infinitely expandable through various add-ons and web integration. Also, like the rest of the Mozilla suite, Songbird is cross-platform, a point that becomes glaringly obvious the moment the app is launched.

First, the good points. Songbird is a lot of fun! On first launch, Songbird asks if you would like to import your existing iTunes library. I was able to import mine without a problem, including DRM’d tracks and movies. Songbird cannot play the movies, but it has no problem playing your iTunes-purchased music. The mashTape extension gives quick and easy access to information from across the web about the currently playing track. It will pull up the artist’s biography, discography, reviews, news, photos, and videos. Another interesting extension is integration with the SeeqPod website. Selecting the SeeqPod Search item from the left pane allows you to search the Internet for songs, and then download them to your library with a right click. The search is a bit slow, especially when Songbird attempts to verify all of the links. Certainly a lot of fun, but the SeeqPod extension seems to be of questionable legality.

Songbird has cleaned up its interface quite a bit, and with the exception of the play and volume controls being at the bottom instead of the top, the default “feather” (or skin) looks fairly similar to iTunes. As a matter of fact, there are a couple of iTunes skins for Songbird, but they either were not compatible with Songbird 1.0, or looked nothing like the real iTunes. The Mini Player view is reminiscent of some of the older media players, a very compact interface that I really enjoy.

For all the fun Songbird is, iTunes will remain my media player of choice for now. While Songbird does make advances with its interface, no skin can make up for the lack of a native Mac UI. Something in the way the app feels while using it gives it away as a port of a “multi-platform” application. A small and rather insignificant example is how the “About Songbird…” menu option opens a sheet covering the library with the software license agreement. A normal Mac citizen would open a separate window, most likely with the application’s icon centered towards the top, followed by the application’s version, and then maybe some other interesting facts about it. A quick look through my other open applications (Mail, iChat, Safari, TextMate, and Yojimbo) shows a consistency that Songbird lacks. This is a small example, but one that illustrates the departure from standard Mac UI conventions that is pervasive throughout the application.

Songbird also consistently takes up quite a bit more resources than iTunes. Songbird uses approximately 20-25% CPU during playback, and has a memory footprint of 100-150MB. Compare this to iTunes, which uses between 4-7% CPU and 80-100MB of RAM. Songbird also takes longer to launch, and resource utilization climbs as more add-ons are installed and used.

Songbird seems like an application that missed its mark. With all of its web enabled functionality, it still cannot perform some basic tasks, like playing a movie or video file, or importing a CD. Given that this is a 1.0 release, I’m looking forward to some great things from this project. Once an open source app gains steam, advances can be made very quickly. I’d like to see a faster start time (how about one bounce in the dock?), a native user interface, and a lot of work done on the back end to reduce resource utilization. Given that there are really not a lot of players in this market, I’m excited to see an iTunes competitor. I just hope that Songbird’s goal was not to be a poor iTunes clone.


Merry Christmas

December 25, 2008

“And the Grinch, with his Grinch-feet ice cold in the snow, stood puzzling and puzzling, how could it be so? Christmas came without ribbons. It came without tags. It came without packages, boxes or bags. And he puzzled and puzzled ‘till his puzzler was sore. Then the Grinch thought of something he hadn’t before. What if Christmas, he thought, doesn’t come from a store. What if Christmas, perhaps, means a little bit more.”

—Dr. Seuss (1904-1991); writer, cartoonist


MobileMe is not a Blogging Platform

December 24, 2008

I thought I’d try OSZen on MobileMe yesterday, to see if I could consolidate even more of my online accounts. Unfortunately, the limitations of both iWeb and RapidWeaver quickly became apparent. I pointed 1and1’s DNS servers at MobileMe, and uploaded an iWeb site. I liked the theme, but the first thing that struck me as odd was the URL. In iWeb I configured the site’s name to be OSZen, and set the Blog page as the home page, but the URL turned out to be http://oszen.net/OSZen/blog/blog.html which, for the home page, was just ridiculous.

Next, I changed the name of the site to “writing”, and wound up with a home page URL of http://oszen.net/writing/blog/blog.html. Again, ridiculous. I’m assuming that this is Apple’s way of allowing more than one iWeb site, but I wish that it had better support for personal domain names.

The other thing that really bothered me about iWeb’s blogging engine was how each new blog post was formatted with the default theme’s pictures and text, meaning that for each post I had to delete the picture and change the layout of the page. That equates to worrying far more about how the site looks than concentrating on writing.

These two small items were all it took for me to switch the DNS setting for OSZen back to 1and1, and back to the comforting ease of WordPress. The great thing about WordPress is that it’s meant for writing and self-publishing, and it does both very well. The more recent ability to update itself and its plugins only adds to the ease of maintaining your own WordPress installation. Also, I’ve got MarsEdit, which rocks, and means that I almost never have to actually get into the admin interface of WordPress. I do all my writing for OSZen from an application that was designed for writing.

I thought it might be fun, since I’m consolidating all of my online accounts and pruning them down, but the truth of the matter is, web hosting at MobileMe is for sharing your family photos with Grandma back home; it’s not a serious blogging platform.


The Coffee Cup

December 24, 2008

I’ve had this coffee cup on my desk at work for the past year or so now. It’s just a plain white cup with the Ubuntu logo on it. I got it from CafePress. I loved it, for one, because the Ubuntu logo is great, the best Linux logo out there. I also loved it because as I was thinking about how to solve one problem or another, the cup was normally there with hot coffee waiting to be sipped as I pondered the solutions. Today I picked up the cup, walked towards the coffee pot, and dropped it. My wonderful Ubuntu coffee cup shattered as it hit the floor.

I loved that cup, and I certainly didn’t want to break it. However, it seems appropriate, as today I also switched back to Windows at work. I’ve been running Ubuntu as my primary desktop at work for several months, and running XP in VirtualBox when needed. Lately, I’ve been needing the VM more and more, as I do more diagramming and planning in VMware Infrastructure Client and Visio, both Microsoft-centric applications. Also, rumor has it that in the next couple of months we will be replacing our aging Lotus Notes servers with Microsoft’s Exchange 2007. IBM released a Linux-native Notes client which supports Ubuntu, and it really works great. When we made the switch to Exchange, I was hoping to use the Evolution client that comes with Ubuntu. Unfortunately, Microsoft changed the MAPI standard for communicating with the server in Exchange 2007, and there is no supported Linux client. Which left me with two choices: run Outlook in my VM, or move everything back to Windows and conform to company standards. I debated this in my head for a couple of weeks, but in the past three days I’ve had X crash on me three times in Ubuntu. When X crashes, it takes all of my X applications with it, along with their data… it’s like Windows ’95 all over again.

X crashing for no apparent reason was the nail in the coffin. I moved all my data over with a USB drive, and Monday I’ll format the Linux partition and run fixmbr from the XP Recovery Console.

I’ve really enjoyed using Linux, but honestly, it’s kind of a relief to be back in a supported environment again. There are still quite a few desktop tools missing from Ubuntu that are available on Macs and Windows. The one I miss most is Evernote, with the aforementioned Visio running a close second. Launchy is nice… not as nice as Quicksilver or Gnome-Do, but nice.

Mentioning Gnome-Do brings up another point. Gnome-Do has been acting up lately, catching on something or other and eating up 99% CPU. The developers are aware of the problem, and are working on a solution. However, using Gnome-Do as an example, the very idea of “Release Early, Release Often” completely goes against the grain of a business desktop. Any Linux desktop will contain beta-quality code, and when I’m relying on a computer to do my job, I can’t have it acting as a beta test platform. Ubuntu is doing lots of cool stuff with 3D desktops and cutting-edge software, but I don’t need it to be cool, I need it to work. Reliably.

One last note about why I’m not using Ubuntu at work any more. My computer is a Dell laptop, mostly used in a docking station, attached to a 22-inch monitor. I noticed after a while that my laptop was getting really hot in the docking station, and I couldn’t tell whether Ubuntu was detecting the docking station correctly, or whether it was displaying on both the internal monitor and the external monitor. When I popped the lid on the laptop, the monitor either came on suddenly or had been on the entire time, and the keyboard was hot to the touch. In the Gnome “Screen Resolution” preferences I found that I could turn the internal monitor off, and I think that solved the issue, but I’m not sure. I’d hate to think that I was actually causing the hardware harm by running Linux on it. I don’t want to spread FUD, but if it’s true, it’s true. When I’m running Windows, I don’t have that problem at all.

So, now I’m looking for a new coffee cup… something to inspire me, and be my companion in my little beige box. Whatever the new design is, it needs to be something that will last, something reliable, something that’s in it for the long haul. Ubuntu has been good to me, both the OS and the coffee cup, but in the end, they both broke, and I’ve got to move on.


How to Fix Linux

December 24, 2008

It’s been nine years since I first installed Linux on a computer of mine. It didn’t last long back then, since I actually wanted to use the computer for surfing the web, sending email, and playing games. Linux has come a long way since then, and now it’s a reliable desktop system at work. However, my system is reliable (and enjoyable) because I am a geek, and I know exactly what it needs to run smoothly.

For example, I use a dual-screen setup with xrandr, and it works well, especially with the new compositing ability in Gnome 2.22. Getting xrandr to work properly required me to edit a few lines in the xorg.conf file to allow for a large maximum desktop size. Compare this with how Windows deals with dual monitors: you plug it in, turn it on, enable it in the preferences, and there you go. Now compare that with how Macs handle dual monitors: you plug in another monitor and it works… bottom line.
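For the curious, the edit amounts to adding a Virtual line to the Display subsection of xorg.conf, sized to hold both screens side by side. The numbers below fit my pair of monitors and are only an example; yours will differ:

Section "Screen"
    Identifier "Default Screen"
    Device     "Configured Video Device"
    SubSection "Display"
        # Wide enough for the laptop panel and the external display together.
        Virtual 2960 1050
    EndSubSection
EndSection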

I realize that there are a lot of valid reasons for this. Proprietary hardware, lack of documentation, legacy code, etc, etc, etc…

I also realize that 99% of people who use a computer do not care.

Computers should be like appliances… plug them in, turn them on, and start using them. I like the toaster analogy. I don’t care how my toaster works. I just want it to make my toast in the morning. I don’t care if I have the “freedom” to take the toaster apart, study the timing mechanism, and re-create it in yet another toaster. I just want toast. I use Linux at work because it is an amazing server. I’ve seen Linux servers keep on trucking through environments that would kill another OS. I use Linux on my laptop at work because it integrates with my workflow and scripting perfectly. I don’t use it for the freedom, I use it because it works well for my environment. My environment, however, is very different from the average home user’s.

If Linux is ever going to take hold, it needs to learn a few things from Apple. Here are a few points that would bring Linux onto Mom and Dad’s desk.

  1. Own the hardware. Apple controls all aspects of its hardware/software relationship. I’m not saying that a particular Linux distro should only be installable on one vendor’s hardware, but a hardware vendor should adopt Linux and fully develop the OS in-house to support its hardware.

  2. Maniacally control the distro. The open source nature of Linux makes this very difficult, but it also makes Linux feel… disjointed. This goes hand in hand with point 1: develop the hardware and software in tandem to support each other. Release the code as GPL, sure… give it back to the community, but control what goes into the distro and develop it in-house. Linux has a bad habit of adding features because it can. Compiz has hundreds of options for ridiculous visual features that no one needs (and a few that everyone does need!), and which really distract from using the computer. Another example: do you think KDE 4.0 would have made it past the desk of Steve Jobs? Can you imagine what that conversation would have been like? Which leads me to my next point:

  3. Appoint someone Grand High Poobah. Someone needs to be there to say “No”. Someone who will say, yeah, that’s a great idea, but we are not going to do that in this distro. At times I think that Mark Shuttleworth is that guy… and then there are other times when he decides to ship every six months whether Ubuntu is ready or not. Shuttleworth could be that guy.

  4. Provide the end to end user experience. When I pop open the lid of my laptop, I expect to be able to start using it within seconds. When I close the lid, I expect the laptop to go to sleep and wait for me to need it again. When the hardware is fused with the software, and features are controlled and perfected, the result is a very fluid, intuitive experience that brings the users back.

The last thing I’d love to see from Linux is for it to stop adding new features. Seriously, stop adding new, untested, beta-quality code and spend some time perfecting what is already there. Fix what you have first. Then, and only then, start adding new features.

Linux, we’ve come a long way, but we’ve still got a long way to go.


AutoYast

December 24, 2008

I wrote this last year and never posted it. I’m glad I found it and can post it now.

One of the projects I’ve been working on in the past week has been a rapid deployment server for SLES 9. I would have liked to deploy SLES 10, but we are constrained by our application requirements. Novell has done a great job of making it easy to deploy SLES or SLED using the Autoinstall and Installation Server options available through YaST. Using Autoinstall, YaST steps you through the options required to generate an XML file; this XML file is read by YaST during system install and automates the process. To build a network installation source, the contents of the CDs or DVD need to be copied to the hard drive, preserving symbolic links. YaST’s Installation Server makes this easy, and also makes “slipstreaming” (to borrow a Windows term) a service pack into the install source automatic. I’ve built the network install source both ways, and I prefer letting YaST do it for me.

Even with all that said, YaST (in SLES 9) is still missing some features, which requires me to edit the XML file directly. The most important missing feature, which Novell included in SLES 10, is the ability to create LVM volumes during partitioning. That’s not to say it’s not possible; it just requires editing the XML source file. Using a little trial and error, I was able to partition the drive with a 200MB /boot (too big, I know) and a 2GB swap, make the rest of the drive an LVM volume group, and then mount /, /var, /opt, /usr, /tmp, /home, and /work inside the LVM. Works like a charm. If you need a working autoinst.xml file, you can download mine here.
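For anyone trying the same thing, the relevant chunk of the profile looks roughly like the sketch below. This is a trimmed illustration, not my full autoinst.xml; the device names, volume group name, and sizes are examples only:

<partitioning config:type="list">
  <drive>
    <device>/dev/sda</device>
    <use>all</use>
    <partitions config:type="list">
      <partition>
        <mount>/boot</mount>
        <size>200mb</size>
      </partition>
      <partition>
        <mount>swap</mount>
        <filesystem config:type="symbol">swap</filesystem>
        <size>2gb</size>
      </partition>
      <partition>
        <!-- partition id 142 (0x8E) marks the partition as Linux LVM -->
        <partition_id config:type="integer">142</partition_id>
        <lvm_group>system</lvm_group>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
  <drive>
    <device>/dev/system</device>
    <is_lvm_vg config:type="boolean">true</is_lvm_vg>
    <partitions config:type="list">
      <partition>
        <lv_name>root_lv</lv_name>
        <mount>/</mount>
        <size>4gb</size>
      </partition>
      <!-- more logical volumes here for /var, /opt, /usr, /tmp, /home, and /work -->
    </partitions>
  </drive>
</partitioning>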

This setup is great, but it required me to boot off of the CD, and then enter a long install=nfs://bla bla bla/bla bla autoyast=nfs://blalbalba line at boot time. To really make the system work, I needed network booting for full automation. I found a great walk-through in this PDF, which, surprisingly enough, worked for me the first time. I had to install the tftp, syslinux, and dhcp-server RPMs, then edit a couple of files and move a couple of things; really no big deal.
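For reference, the tftp side ends up as a pxelinux.cfg/default file that passes the same install= and autoyast= arguments I had been typing by hand. The server address and paths below are made up for illustration:

default sles9
label sles9
    kernel linux
    append initrd=initrd install=nfs://192.168.1.10/install/sles9 autoyast=nfs://192.168.1.10/install/autoinst.xml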

Now, I’m ready. When we get 100+ servers in, which I’m told I’ll have 7 days to install, I’ll be able to say “what would you like me to do with the rest of the time?”