VMWare Buys SuSE, What Now?

The WSJ (paid version) just announced that VMWare has bought SuSE.

WHAT DOES THIS MEAN?

This deal included the cloud, but not any of the other OES layers. I had been a major supporter of the idea of having the OES (Open Enterprise Server, or Other Expensive Shit) software as an add-on, and I thought it worked really well in that respect. I really didn’t like eDirectory in particular, I thought ZCM was junky, and I thought most of the other Novell products were trying to find some identity between legacy support and being similar-to-but-not Microsoft. eDirectory never aspired beyond being Active Directory with XML support. The driver set thing was interesting but otherwise poorly implemented. The shared storage services were never particularly impressive and ended up half-assed compared to ZFS. The whole “similar to but not” thing extended into the depths of the distribution too – SuSE really is RedHat under the hood, and I’ve made RedHat boxes run OES. Is it easy? No. Can it be done without breaking the OS or the repositories? Yes. My guess is there’s going to be a brief market for this and then it’s going to go away.

The IP going to VMware is the core OS and the cloud. This has two pluses for VMware. One of them is that SuSE has a nice GUI; VMWare doesn’t want to be Linux, it wants to be a GUI for managing VMs. The cloud thing is a natural fit since now you don’t have to provide storage, you can simply run your VM “in the cloud”. I personally think the cloud is a poor fit for VMware, but maybe they’ll do something cool with it.

Attachmate is buying the rest of the IP, including ZCM, which was the only profitable sector there. Attachmate does UIs for mainframes and legacy systems (including Unisys, and yes, I did lol) and really has no interest in or use for 90% of the OES suite. They’re simply going to make connectors to the NetWare terminal. All the rest of the NetWare software is likely to go away.

So what’s the silver lining? For one, Mono development is DOA. Thank god. Mono was Linux’s bridge to .Net and it never worked well. For two, we get rid of the WebDAV alternatives.

What’s the downside? We lose out on YaST, which was probably the best tool for bringing ease-of-use to Linux. Ubuntu used it under the hood. The question becomes which side we want to be on – Ubuntu or RedHat? On one hand, RedHat developed most of the code that makes things like YaST run. On the other hand, Ubuntu took it and made the UI pretty. RedHat dumps pretty in favor of flexible and robust, while Ubuntu trades away some flexibility and robustness (and security) for ease-of-use. This is a tough choice.

What of all the patents? Attachmate will de facto own Linux as a trademark, so it won’t surprise me to see Attachmate sell this to Microsoft in whole or in part. I think there’s a strong argument for this being anticompetitive, but MS has a pile of money and lawyers and every reason to try to tie up RedHat, who effectively has zero competitors now.

Leaving Novell’s SuSE Linux Enterprise Server

I was trying to figure out how to write this so it wasn’t an out-and-out hit piece on how badly Novell shot themselves when it comes to running Linux. Then I realized that this was the company that brought us NetWare, and they could never get ahead of it. Once I realized this, I understood the fundamental truth of the issue: Novell has always stood in Microsoft’s shadow, and this is why they never achieved greatness.

NetWare always ran on top of DOS. They were inseparable, even as OS/2 ran on top of DOS. DOS wasn’t even particularly nice, but the selling point of DOS back then was that it wasn’t UNIX. And it wasn’t that UNIX was unpleasant to use – UNIX was a great write-once-run-anywhere example with POSIX (for the most part) – it was that it was so darned expensive. The rise of Linux has been documented ad infinitum on this blog and elsewhere. If you’re unclear about it, grab a copy of the absolutely great Revolution OS and watch it. It’s not as much about politics as The Cathedral and the Bazaar; it’s more about the people involved and their motivation in the face of absolute commercial adversity.

Let’s consider, for a moment, the present state of a Linux company as a whole. Windows 7/Windows Server 2008 (I’ll just call it “Win 7”) finally has threading and user separation that is actually worth something. Windows 7 scripting is still a crapshoot with PowerShell, but it’s vastly better than it ever was. People are finally starting to take .Net seriously since the Win 7 threading stopped sucking. It’s still hideously expensive to run, but it’s got the critical advantage that people are generally familiar with it. I know most of my readership runs Linux – bear with me, I run it too. When we say that Linux has better, more robust filesystems, this is true. When we say Linux is typically faster, this is true. When we point to Linux and say it’s more secure, this is true. The problem for a Linux company right now is that Windows 7 is probably good enough for most people. MS has put out something good enough to raise the bar, and people who were not terribly happy with Linux because of their vendor might take a moment and say, “Well, now performance is similar for my specific purpose, let’s give it a try.” I’m looking at you, Novell, because your tech support sucks, and this is coming from someone who’s been running Linux for 10 years and saw how badly Caldera’s support sucked.

What happened to Caldera? Novell bought them. Sigh.

When the company I worked for decided to shove off Novell’s SuSE, this was exactly the reasoning. Novell’s support sucked, performance was marginal and vastly poorer than their marketing material would suggest, I suggested going to RedHat, and the company ultimately decided Windows 7 was “good enough”. There is now balkanization where various departments are spinning off their own IT groups; they were happy and satisfied with SuSE, but these IT groups are running OpenSuSE and they’re not using the Novell proprietary services. They would be every bit as happy on RedHat or Fedora or Debian or Ubuntu… as they would with SuSE. To them it doesn’t matter what Linux they’re running so long as old faithful chugs along and dishes up their applications. To them, Linux is “good enough”. To my group, which has to do things like directory administration and file sharing, Novell was a serious problem on whatever OS we ran it on (including XP clients to eDirectory, which often crashed or did weird stuff when Java was updated), and the new problem was that it wasn’t even “good enough” – it was totally blown out of the water.

Let’s take a step forward and get out my Penguin Crystal Ball. Novell’s in trouble because they’ve allied themselves as a partner with MS and touted their AD compatibility. The problem was they did this before Win 7 really got a foothold, and now the Big Push from MS is Windows 7 as a server OS. So Novell, once again, finds itself competing with MS in MS’s own ballpark. That’s just the technology perspective, never mind that Novell only recently fixed up eDirectory’s AD support to make it robust. From a money perspective it’s a no-brainer – the cost of the license plus the cost of support is about what you would pay for a similar amount of performance from MS. I’m not going to say they lied here, but the performance numbers were definitely padded in my opinion, and it only got worse once the virtualization craze hit because everything was even slower. RedHat is a great example of doing this correctly – the price is competitive and the numbers are correct, but more to the point, RedHat understands that MS is the Big Dog in the neighborhood, and RedHat’s claim to fame is that they serve as AD replica servers flawlessly. Now you have an MS product which is fully supported, but if you have a branch office that doesn’t need all the bells and whistles, you can throw a RedHat Enterprise Linux server out there for $100 and serve up a full replica of AD. You can’t even buy Windows 7 for $100. RedHat’s other great idea: it doesn’t care if you’re a Mac or an MS client. It can serve up the domain and filesystems wholly transparently. Try joining an AD domain with an Apple sometime, see how that works out for you.
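If you’ve never set that up, the plumbing is basically Samba plus winbind. Here’s a minimal sketch of a branch-office domain-member file server; the realm, workgroup, share, and admin account are placeholders, Kerberos has to be pointed at the domain separately, and package and service names vary by release:

# /etc/samba/smb.conf - the interesting bits for an AD member file server
[global]
    workgroup = EXAMPLE
    realm = EXAMPLE.COM
    security = ads
    winbind use default domain = yes

[branch]
    path = /srv/branch
    read only = no

# with Kerberos already configured for the domain, join and start serving
net ads join -U Administrator
service smb restart
service winbind restart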

Novell, as a company, I am fairly sure will persist. There are a lot of people, such as ourselves, who have legacy applications that run on NetWare but want some bridge to the future. There’s also a place right now for companies that do Linux distributions, because the Linux kernel is going through growing pains with regard to hardware and “kernel module loaders”. The question is: how long can they hold on with both Apple and MS going for two different market segments? Apple is quickly becoming the de facto desktop for people who think buying a new computer is the solution to computer problems. They made a great choice putting a pretty face on the good old UNIX workhorse, and they weren’t so vain as to make broad, sweeping changes to POSIX (looking at you, Novell) or hide the command line. It even runs Linux software almost 100% of the time, so it has a wealth of applications. Win 7 in this respect is too little, too late. However, on the server side, Windows 7 is just what it needed to be to compete with UNIX deployments. Java, threads, scripting, POSIXy stuff, and great privilege separation are all there. If I were Novell, I would be doing some serious soul searching.

If I were looking for a new way to update my infrastructure, I’d probably give Windows 7/Windows 2008 a try and put RedHat into service as a performance enhancer for my new, shiny system.

Update: In case anyone is wondering “what do I do if Novell tanks?” – you can install OES (the Novell enterprise software) on top of RedHat. It takes a bit of library versioning work, but it can be done and it does run correctly.

nVidia fix for KDM not starting

If you’re like me and you updated your Linux desktop only to find that Xorg (because of nVidia) no longer plays nice with XDM, KDM, or GDM, then you’ve hit the hilarious ignoreABI bug.

For some people, editing their /etc/X11/xorg.conf works:

Section "ServerFlags"
	Option		"IgnoreABI" "True"
EndSection

Not me. No matter what flags I tried to pass in via xorg.conf, it wouldn’t go, probably because sometimes the GUI is looking at the configuration and other times it’s not. According to /var/log/kdm.log it was reading the file, but it cowardly refused to honor that directive.

The fact that the Ubuntu guys call it a “good driver” just means the typical Ubuntu user has no idea what a good driver looks like.

Finally, someone with half a brain ran into this on Xorg 1.5 and their new nVidia card. You can read their fix here; it’s basically what you would expect: shim out /usr/bin/Xorg with a script, have the script call the real Xorg.0 executable, and pass all the arguments along with -ignoreABI. Incidentally, something really strange is happening here, because editing /etc/X11/xinit/xserverrc to add -ignoreABI to the args line doesn’t get it passed to Xorg.
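The shim itself is only a few lines. Here’s a rough sketch of it (as root; it assumes you’re happy parking the real server binary at /usr/bin/Xorg.0, which is what the fix above does):

mv /usr/bin/Xorg /usr/bin/Xorg.0
cat > /usr/bin/Xorg <<'EOF'
#!/bin/sh
# hand the display manager's arguments to the real server, plus -ignoreABI
exec /usr/bin/Xorg.0 "$@" -ignoreABI
EOF
chmod +x /usr/bin/Xorg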

I Hate nVidia

I have a dirty confession – I’ve always liked ATI stuff, except for when the GeForces first came out and they were cheap as heck. Two of them cost about what a single high-end 3DFX or ATI card did, and the GeForce would still outperform them. Also, these were the college days when AGP was still new, and having two video cards meant one less thing to kill your PC. Then again this was Drexel, and we had the Kelly Hall heat wave that year, and it killed my motherboard.

ATI fought, and fought hard, to get back to the top, and it was only after AMD bought out the last of the DEC stuff for really awesome 64-bit support and then gobbled up ATI that things got good again. Frankly it was a great move, since graphics are almost all math, so having a 64-bit (or even 128-bit) pipe with multipath and short lines is just great.

Then came the licensing wars.

Linus (correctly) said that kernel shims were OK so long as they’re open source. He’s no dummy: kernel shims let the kernel load blobs, but being open source they handle the linking, and once you’ve got the linking objects you’re most of the way to having a driver, since you can see what the card is being sent and what the kernel is sending. Open source drivers followed, but some of the really exotic stuff only recently caught up.

nVidia has always, always been a pain in the ass on Linux. The shim wouldn’t build when it first came out and required users to edit the Makefile, and certain gcc versions produced drivers which were slow or had unintended consequences depending on how they did memcpy and other low-level functions. Installing nVidia was mostly a one-way ticket to either kernel lock-in or building it by hand. To add insult to injury, nVidia never offered a unified driver and always had three versions. This was OK up until recently – they kept a list of cards, so you generally knew if you needed nv, nvidia-G01, or nvidia-G02. Now the bad news: nVidia has decided to drop updates for older cards. I realize they can’t update them forever, but what’s missing? Open sourcing the drivers.

ATI hasn’t really offered up any open source drivers, but they did offer unified drivers. Download one, build it, you win! The build process is pretty seamless. ATI hasn’t moved to quash open source drivers either, to the point where the open source drivers are so stable that they are now officially merged into Mesa. If you’re wondering what Mesa is, it provides OpenGL functionality to the system in a common package. Having drivers in it for a major manufacturer like ATI means you simply install Mesa and 3D just works. No more diddling around with drivers, third-party crap, and the ATI clock tray icon (unless you want to).

Now if you’re like me, you’re running OpenSuSE. You’re probably not like me, but you might be running Linux. Windows users should have stopped reading six paragraphs ago. I upgraded to 11.3 from 11.1 (which I needed to run in order to hack the Novell client from SuSE 10 into working, because Novell doesn’t even update their own stuff) and what broke? Oh, the nVidia drivers. Given that this is a work PC, I have no say in what video card I get. I went to fire up SaX2 and was told it was deprecated because XOrg updated its autodetection routines. The new XOrg is nice, the new SuSE is nice, but with no new nVidia release my KDM login manager doesn’t work. Weirdly enough, I can log in on the console and do a startx, which does work, but it would be nicer to have the GUI login running. (Then again, having an ominous text console keeps the n00bs off my PC.) After hacking on this for most of the last few days, it’s definitely a problem in how nVidia does the initialization, and it’s directly related to the fact that I am running nvidia-G01. Way to go, nVidia.

My laptop (ATI)? Runs great, and it’s a Radeon Mobility 600. Hardly new. Guess we know whose video card I’ll be buying in the future.

X and sudo

For some strange reason, Ubuntu is really misbehaved when it comes to preserving environment variables. The normal fix for running X applications over sudo is to edit /etc/sudoers and make sure that DISPLAY and XAUTHORITY are in the preserved variables list. Log out, log in, it’ll work.
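For reference, the sudoers entry I mean looks like this (edit it with visudo; exact syntax can vary with your sudo version):

Defaults env_keep += "DISPLAY XAUTHORITY"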

Ubuntu, for whatever stupid reason, absolutely refuses to preserve these variables. Granted, here at work we run Novell’s SuSE and I too run SuSE, so when the Ubuntu guys (both of them) have a problem, it’s sometimes hard to troubleshoot, especially when all I can say is “It works for me”.

Here’s the deal. If you’re having problems making X applications work over sudo, either allow people to sudo bash to open a root shell they can work in or script the following:

xauth merge /home/USERNAME/.Xauthority ; /sbin/yast2

Obviously, substitute USERNAME with the user having the problem, and /sbin/yast2 with whatever you’re trying to run in X as root.

You guessed it, $USER doesn’t work because Ubuntu doesn’t export $XAUTHORITY or $DISPLAY either.
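If you’d rather not hard-code the username, sudo always sets SUDO_USER on its own, even when it scrubs everything else, so a little wrapper works too. A rough sketch (the :0 display and the /home layout are assumptions, adjust for your setup):

#!/bin/sh
# run a single X program as root when sudo strips the X environment
export DISPLAY="${DISPLAY:-:0}"                # assume the local session is on :0
xauth merge "/home/${SUDO_USER}/.Xauthority"   # pull in the invoking user's cookies
exec "$@"

Drop it somewhere like /usr/local/bin/rootx (the name is whatever you like) and call it as sudo rootx /sbin/yast2.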

OpenSuSE 11.1 is full of win, but new ATI drivers, maybe not

Phortunately, the Phoronix crew knew what to do. ATI has had a horrible problem with their drivers in 64-bit mode since day one. I’m working on a dual-core 64-bit Pentium 4 with an ATI Radeon X600 card here. OpenSuSE 11.1 is a dream, but it has its share of bugs. Ah, the joys of operating system maintenance. Anyway, it turns out that whatever you do with the ATI drivers, you end up with /usr/lib/dri. This is full of fail. You want to remove that directory and symlink /usr/lib64/dri in its place.
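In practice (as root) that’s about two commands; it assumes the 64-bit DRI libraries really do live in /usr/lib64/dri on your install, so look before you leap:

rm -rf /usr/lib/dri                # remove the busted directory the driver install leaves behind
ln -s /usr/lib64/dri /usr/lib/dri  # point it at the real 64-bit DRI libraries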

Special thanks to the Phoronix Crew.

Linux Arcanum and SMART Warnings

If Languages Were Religions is riotously funny.

I finally figured out what’s wrong with my desktop. For the longest time the instrumentation was just weird. It would crash randomly, have strange bus problems (which I thought were related to aging video cards), and the voltage from the power supply had a noticeable bit of noise on it. Other than the generic logs of “your computer has recovered from a serious error” there was nothing to point to. MEMTEST would show all the DIMMs had a bad line, so I just assumed the mobo was slowly dying and figured one day I would come home to find it not working.

Finally, one day I happened to be reading the syslog on my Linux box, trying to track down this one idiot on a modem who was trying to hack it, when I got the message:

Dec 17 08:29:39 HopsAndBarley smartd[2532]: Device: /dev/sdb, Failed SMART usage Attribute: 9 Power_On_Hours.

OH MY GOD, SMART ACTUALLY WORKED. Basically it’s saying my old Linux drive, the one I use all the time, is crapping out. I checked to see where the spare was and realized that the spare became the Windows drive (120GB) and my Windows drive became my Linux drive. The spare-spare drive I had is a 10GB drive I used to use as a raw device for caching DVD data while authoring, which means I have no usable spare at all. So I have a choice: I can go through my Windows drive and reload it, thus creating enough space for a Linux partition, or I can run the computer without the Linux drive entirely and give up my primary OS for the sake of having anything to use at all.
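If you’d rather poke at the drive on demand instead of waiting for smartd to complain, smartmontools will query it directly. A quick sketch, using whichever device smartd named:

smartctl -H /dev/sdb    # overall health self-assessment
smartctl -A /dev/sdb    # the full attribute table, including Power_On_Hours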

Since the botnets have been a pain recently, I came up with a new /etc/hosts.deny:

ALL : .ru
ALL : .cn
ALL : UNKNOWN

Basically, if you’re coming from .ru or .cn, or your IP doesn’t reverse-resolve to a hostname, you’re not connecting.
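If your distro ships the tcp_wrappers test tool, you can check how a given client would be treated before trusting the rules. A sketch (the .ru hostname is just a placeholder):

tcpdmatch sshd badhost.example.ru   # anything resolving under .ru should come back denied
tcpdmatch sshd 195.162.62.230       # an IP with no reverse DNS should match UNKNOWN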

And of course all the other security stuff is in place, like denying root login, which seems to be what most of the idiots out there are after.

Here are the types of logs:

Dec 16 13:12:03 HopsAndBarley sshd[2917]: Invalid user t1na from 195.162.62.230

These actually go on for quite a few usernames; the guy’s working off a default list. These will now be denied outright by TCP Wrappers, since they’re caught by hosts.deny’s UNKNOWN directive.

Oct 24 20:19:11 HopsAndBarley sshd[9074]: Invalid user newsletter from 59.145.145.146
Oct 24 20:19:14 HopsAndBarley sshd[9079]: reverse mapping checking getaddrinfo for dsl-kk-static-146.145.145.59.airtelbroadband.in [59.145.145.146] failed - POSSIBLE BREAK-IN ATTEMPT!

That asshole is from India. I’m trying to decide if I want to blacklist India from connecting to me, except that I have Indian friends. Instead I simply set my SSH max auth retries down to 1 and the connect timeout to 5 seconds, making it prohibitively expensive time-wise to try this crap.
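In sshd_config terms that works out to roughly the lines below (a sketch; I’m treating LoginGraceTime as the connect timeout, the root login denial mentioned above is PermitRootLogin, and sshd needs a restart after editing):

# /etc/ssh/sshd_config - just the relevant lines
MaxAuthTries 1
LoginGraceTime 5
PermitRootLogin no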

And finally this poor asshole wins the award:

Dec 8 14:52:29 HopsAndBarley sshd[15188]: Invalid user felix from 62.141.122.246
Dec 8 14:52:31 HopsAndBarley sshd[15193]: reverse mapping checking getaddrinfo for dial-up-1-118.spb.co.ru [62.141.122.246] failed - POSSIBLE BREAK-IN ATTEMPT!

Because it took him so long to connect, he was at it for over 12 hours.

Now, the stuff that makes me less happy is that this is OpenSuSE. I love SuSE; it feels like RedHat Done Right. But some of their default security settings aren’t appropriate for a desktop system. I realize there are probably times where denying UNKNOWN hosts access would ruin someone’s web-surfing experience, but having SSH respawn indefinitely with no delay or max auth retries is sloppy. On the other hand, OpenSuSE and SuSE in general are really good at not spawning services they don’t need, and the default firewall for a desktop host is really restrictive (actually, it denies all inbound traffic without matching outbound traffic). It’s OK to have this as a point defense, but in today’s age of browser-based exploits, it wouldn’t surprise me in the least to find out someone starts killing Linux desktops by connecting to localhost once they have your browser. A firewall is nice, but defense in depth is a requirement.