Posts tagged with 'Sysadmin'

Wireless Tablet Terror

  • Posted on June 16, 2011 at 1:43 pm

Wireless tablets are all the rage. I wonder why. I just had to reconfigure a wireless access point to use TKIP/AES instead of AES alone. WPA2 with AES has been around since 2006 or even earlier. The end of personal computing is here. My laptop had no problems using the Wi-Fi network. The new gadgets didn’t receive a single packet from the Internet.
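
In hostapd terms the change amounts to this – a minimal sketch, assuming an access point that runs hostapd (mine only exposes a web UI, but the options map directly):

    # hostapd.conf (excerpt): allow TKIP alongside AES/CCMP under WPA2,
    # because CCMP-only locked the tablets out
    wpa=2
    wpa_key_mgmt=WPA-PSK
    rsn_pairwise=TKIP CCMP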

Thinking positively, at least I got around to sliding my fingers on the Galaxy Tab just like a dog scratching at a door. It was definitely a whole new user experience. No one should miss this. It’s our new way of life.

Ruby on Rails – continued madness

  • Posted on January 16, 2011 at 4:41 pm

I am still at odds with Ruby and Rails, but now I know why the stupid gems are not available as Debian packages. The Ruby people insist on using their own gem tool as a package manager. Debian already has a very capable package manager. Having two package managers is a bad idea. Other language ecosystems manage this just fine. Take Perl, for example. Perl has CPAN, but you can build a Debian package out of any module found on CPAN. It remains a mystery why Ruby gems refuse to work this way.
It seems fair to warn everyone intending to use Ruby: “Please install this crap from source and forget about your distribution’s package manager!”
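
For comparison, here is the Perl workflow in practice – a sketch using dh-make-perl, with the module name purely as an example:

    # build a Debian package straight from CPAN and install it
    aptitude install dh-make-perl
    dh-make-perl --build --cpan Some::Module    # module name is an example
    dpkg -i libsome-module-perl_*.deb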

Speaking of gems, here’s another one. I recreated the production environment in order to simulate the intended upgrade path. I installed Rails 3.0.3 instead of the Rails 2.1.2 package that runs on the production environment. I just wanted to see what happens. After changing the version number in the application’s environment.rb file I get the error message no such file to load -- initializer. Great, a file or some files are missing, but no mention of which ones. Checking the logs reveals that the Rails 2.1.2 gem is missing. Good thing we’re back to the “you’ve got to have multiple versions of the same gem installed, just to feel good about it” mindset. I am a fool to assume that high-level languages are designed to protect sysadmins and developers from the peculiarities of specific versions of the components involved.
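
For the record, the two moving parts involved – a sketch of the constant Rails 2.x reads from config/environment.rb, plus the gem commands that would satisfy it:

    # config/environment.rb pins the framework version in Rails 2.x:
    #   RAILS_GEM_VERSION = '3.0.3' unless defined? RAILS_GEM_VERSION
    # The matching gem has to be present, so install the target version ...
    gem install rails -v 3.0.3
    # ... and, apparently, keep the old one around as well:
    gem install rails -v 2.1.2
    gem list rails        # shows every installed version side by side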

Update: OK, I finally found the strategy I wanted to avoid. Remove all Ruby Debian packages, get Ruby Enterprise Edition (which is basically a fork of Ruby in order to incorporate sanity) and install it. Kudos to Debian for not letting this crap on board, kudos to the REE developers for delivering this solution, and a heartfelt and sincere fuck you to Ruby!
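
For anyone taking the same escape route, the procedure boils down to this – a sketch, with URL and version number as examples only:

    # fetch and unpack Ruby Enterprise Edition (URL and version are examples)
    wget http://rubyenterpriseedition.com/ruby-enterprise-1.8.7-2010.02.tar.gz
    tar xzf ruby-enterprise-1.8.7-2010.02.tar.gz
    cd ruby-enterprise-1.8.7-2010.02
    ./installer    # interactive; installs to /opt/ruby-enterprise-<version> by default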

Madness on Rails

  • Posted on December 31, 2010 at 1:12 pm

Ruby on Rails is all the rage. It is quick, fast, efficient, trendy, new, slick, cool and, most of all, extremely annoying. I am trying to upgrade an Apache+Phusion Passenger+Rails installation. Everything runs on Debian 5.0. Apache and Phusion Passenger are compiled from source (as is the MySQL database). Ruby and Rails come from the Debian packages (with backports (and with special Ruby backports)). Everything’s a total mess.

  • You need a ton of Ruby gems for anything to work.
  • You have a ton of versions of all the gems. Of course you can install them in parallel.
  • Debian’s Ruby gems won’t work.
  • You need a RubyGems manager from backports or wherever.
  • You get NULL pointer given error messages – exactly the kind of thing a high-level language is supposed to spare me. By the way, this error message hits right in the middle of a Redmine database schema upgrade, and the upgrade script merely remarks that some updates won’t get executed. Hello? Database consistency, anyone?
  • You can’t easily figure out why something fails, what requirements it has and which combination of versions you need to get it working.
  • You get lots of 500 and 404 HTTP status codes.
  • Something got updated and lost MySQL support, so MySQL support is now a gem which can’t find the includes and libraries in /usr/local/ – and there’s no help in discovering the options that point the RubyGems manager at them (see the sketch after this list).
  • You can choose between Ruby 1.8 and 1.9. Debian probably won’t switch to 1.9 because of stability issues. And by the way, gems with identical version numbers can behave completely differently under 1.8 and 1.9.
  • If you finally discover the RubyGems backports and update your gems, your RubyGems manager gets updated, and all gems are lost and have to be installed again.
  • RubyGems 1.4.0 proudly tells you that rubygems is switching to a 4-6 week release schedule, so that things break more often.
  • RubyGems takes ages to do anything (while maxing out one CPU core and leaving the others at 0% load).
  • The Phusion Passenger install script for the Apache module has multi-coloured output, but you cannot do much if something isn’t found.
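
At least the MySQL gem item has a barely documented escape hatch: everything after a literal -- on the gem command line is handed to the extension’s build step. A sketch, assuming mysql_config from the self-compiled MySQL sits at /usr/local/bin/mysql_config:

    # tell the mysql gem's native build where the MySQL headers and libs live
    gem install mysql -- --with-mysql-config=/usr/local/bin/mysql_config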

The list is probably endless. I know why I use PHP instead. PHP is crap, but at least it’s honest about it and you can get it to run eventually. Hell, even web apps in C/C++ or Java are easier to maintain.

Telecommunications Zombies

  • Posted on September 7, 2010 at 3:20 pm

Zombies are a familiar concept to every cineaste and survivalist. They are closely followed by Amiga computers that are still in operation. But nothing beats nostalgia and the return of the undead like a telecopier!

Some call it a fax, derived from telefacsimile, which was commonly shortened to telefax and finally fax. Fax systems use the good old POTS. Our fax server races onto the net with an ELSA MicroLink 33.6TQV. Even without a tailwind it easily manages 14400 baud (that’s bit/s, for the broadband-damaged among you). Just today I even used a G3 fax machine that manages 33.6 kbit/s, also without a tailwind. In August we even upgraded a fax server and installed a B1 PCI card for faxing over ISDN (that’s real fax broadband!).

Upgrades? Pah! For the modem’s firmware, the fax server reports: Ver. 1.24 vom 17.12.96 (that’s December 17, 1996).

An upgrade will come only when I read that version string on Bugtraq.

Semi-sentient installation scripts

  • Posted on July 5, 2010 at 6:01 pm

I like the SYMPA mailing list manager. SYMPA really is comfortable and has more features than you will ever need (which is also its biggest disadvantage). Usually I install it from source, because that allows for better control of the upgrade path. Unfortunately, SYMPA 6.0 and 6.1 are a bit too new for the current Debian 5.0 (stable) release. So I decided to use the SYMPA package from the repository – which cannot be installed.

The reason is my aberrant behaviour. I do not run databases on all hosts. There are database hosts and there are application hosts. This is a very old-fashioned way of keeping things separated. It also leads to a very elegant failure of the SYMPA package’s post-install script.

  • The post-install script creates a sympa database and a sympa user for you. To do this it wants the Postgres admin password.
  • If you supply all credentials (only the Postgres admin credentials exist so far; the sympa role is created by the script) and you have no Postgres server package on your system, the install script will fail: Failed to determine UID for postgres user.
  • The package manager does not automatically install Perl’s DBD::Pg module, which SYMPA needs to talk to the Postgres database.
  • Either the install script or I failed to provide a password for the Postgres admin user (I am pretty sure I entered it).
  • The install script fails to hand ownership of the sympa database over to the sympa login role, resulting in a permission denied error when SYMPA is started (see the sketch after this list).
  • If you change the ownership of the database and rerun aptitude install sympa, the install script complains that the sympa database already exists.
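
The workaround for the last two items is to do the script’s job by hand on the database host before installing the package – a sketch, with the role and database names the package expects and everything else as an example:

    # on the application host: install the Perl driver the package forgot
    aptitude install libdbd-pg-perl
    # on the database host, as the postgres superuser:
    createuser --no-superuser --no-createdb --no-createrole --pwprompt sympa
    createdb --owner sympa sympa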

This is great work. There are already bug reports in the Debian bug database, so I am not the first to encounter this. Folks, people really do run separate hosts for different purposes, especially now that virtualisation is so widespread. Please don’t assume that sane admins run overloaded servers with a thousand services, just in case. Thanks!

Where is Sunshine Computing?

  • Posted on June 16, 2010 at 1:43 pm

It’s raining in Vienna. I don’t mind. I like rain. I like the temperatures even more. Nothing beats sitting in the office in a t-shirt and enjoying the 16°C. This is paradise. Unfortunately I am moving a web site from server $OLD to server $NEW. The web space consists of about 17 GiB (or 14 GiB compressed). This isn’t much, especially in times of overpowered hardware and the almighty Internet. The servers are cloud-based, and moving data can be done in nanoseconds, just by wishing it into existence.

And then you woke up.

My attempts to move the data over SSH/tar/rsync have failed, because server $OLD uses a special setup with proxies that slow down some data transmissions (don’t ask). So now I am transferring the most recent encrypted backup from the backup server to server $NEW via HTTPS. The backup server and server $NEW are well-connected, co-located stuff, cloudy and such. The download software says it still needs about 90 minutes at a rate fluctuating between 1 and 4 MiB/s. This is next to instantaneous. Considering that most consumer-grade Internet access lines have a low upstream (hey, you’re supposed to consume, not create!) and that cloudy companies want you and your data separated as much as possible, the outlook from here is just fine. Just adding ‘i’s in front of every word isn’t going to cut it. Too bad.
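
For reference, the transfer that should have worked – an entirely ordinary sketch (paths and hostname are examples) that $OLD’s proxies managed to defeat:

    # compressed, resumable copy of the web space over SSH
    rsync -avz --partial --progress /var/www/site/ user@new.example.com:/var/www/site/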

If cloud computing is the future, you better start relying on local arks with Gigabit Ethernet!

Thoughts about fsync() and caching

  • Posted on March 19, 2010 at 10:54 pm

I am currently reading material from a talk about caching and how developers (or sysadmins) reliably get data from memory to disk. I found this gem I want to share with you.

fsync on Mac OS X: Since on Mac OS X the fsync command does not make the guarantee that bytes are written, SQLite sends a F_FULLFSYNC request to the kernel to ensure that the bytes are actually written through to the drive platter. This causes the kernel to flush all buffers to the drives and causes the drives to flush their track caches. Without this, there is a significantly large window of time within which data will reside in volatile memory — and in the event of system failure you risk data corruption.

It’s from the old Firefox-hangs-for-30-seconds-on-some-systems problem, described in the fsyncers and curveballs posting. Did you catch the first sentence? “Since on Mac OS X the fsync command does not make the guarantee that bytes are written.” This is a nice one, especially if programmers think that fsync() really flushes the buffers. It doesn’t always do that. And in case you want to be deprived of sleep, go and read the wonderful presentation titled Eat My Data. It’s worth it.
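
There is no portable fix, but on Linux you can at least shrink that window of volatile-memory residence by switching off the drive’s write cache, trading speed for durability – a blunt sketch, not a recommendation:

    hdparm -W /dev/sda     # query: shows the current write-caching setting
    hdparm -W0 /dev/sda    # disable the drive's write cache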

Remote Administration is Fun!

  • Posted on January 26, 2010 at 3:50 pm

In theory we all live next to the cloud and system administration could not be happier. You use the Net all the time. You never need to go anywhere any more. All systems are connected and you just need to log on and get started. Right. No.

  • First of all, the ILO remote management interface disappears when you connect to it with TCP ECN enabled. The TCP/IP stack of the management module is simply broken. So: deactivate TCP ECN (see the sketch after this list).
  • After trying different ways to connect to the console, we settle for Internet Explorer 8 (we had to install Java, of course). Two other machines with Debian, Firefox and Java failed (despite ILO’s generous offer). The ActiveX version of the KVM console failed, too.
  • The oybbql shpxvat UC server features Broadcom NetXtreme II BCM5709 Gigabit Ethernet cards with proprietary firmware (bnx2-09-4.0.5.fw). This means that the Debian net-install CD is next to useless, because networking won’t come up without the firmware.
  • The Debian installer wants its USB medium with the firmware at the first prompt. If you fail to supply the medium (mounted via ILO) at the first prompt, /dev/sda1 won’t be mounted (it only tries to mount /dev/sda instead).
  • Finally, the installer is working and has network. Unfortunately the ISP forgot to tell us the IP address, the netmask and which of the two NICs is patched (actually both are patched, but only one gets routed).
  • Meanwhile the ILO module kicked our session (the browser window closed). The session is still active, but the window is closed and can’t be reopened. You have to reset the ILO module to clear the session (rebooting the server does not help).
  • Rebooting.
  • Hoping for the best.
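
Deactivating TCP ECN on the connecting host, as mentioned in the first item, is one sysctl away – a sketch for a Linux client:

    # disable TCP Explicit Congestion Notification for new connections
    sysctl -w net.ipv4.tcp_ecn=0
    # persist across reboots
    echo "net.ipv4.tcp_ecn = 0" >> /etc/sysctl.conf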

It’s fun, you should try it, too.

Well…

  • Since we mounted a USB device for the firmware, GRUB got confused, and a reboot showed “error 15” (file not found).
  • The Debian installer nicely ejected the CD, so it’s impossible to reinstall without asking remote hands to put the bloody CD back into the drive.
  • Reinstalling…

Done. It worked. The system is able to reboot all by itself. Now let’s sync a few GBs of data from one server to another.

Things that just work

  • Posted on January 12, 2010 at 10:34 pm

A lot of people dream of gadgets, software and hardware that just work. Just like that. Complexity is the enemy of this simple concept. Our office features a Redundant Array of Coffee Machines (RAIC level 1, parallel brewing). In theory we have four coffee machines. One is broken and features the highest complexity, another is slightly less complex and works, and the last two are quite simple and have never broken down.

[Screenshot: Passenger Ruby on Rails server error. Caption: Rails deployment that just works.]

Software can be complex, too. Note the year 2010 bug in various applications. Even one of my projects featured a year 2010 bug (technically it was a January bug that would have struck in any January). The screenshot shows a nice example of an optimistic Apache module used for deploying Ruby on Rails code. If you count internal server errors as productive tasks, then the module is absolutely correct – a fine example of deployment that Just Works™.

Whenever I hear the word passenger I have to think of Dexter. I don’t know why.

SugarCRM, Apache and Browser Pains

  • Posted on December 30, 2009 at 10:58 pm

I just spent hours doing upgrades on a web server. In principle it’s quite simple, since it’s a LAMP system. Due to special policies, Apache, PHP and MySQL are run through the compiler there. No big deal, it’s fairly straightforward. After numerous segmentation faults I had to find out that the PHP build prefers the old MySQL library. phpMyAdmin then shows a nice warning about it. It actually leads to crashes. Once everything worked, I took on SugarCRM.
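
The eventual fix was to point PHP’s build explicitly at the freshly compiled MySQL instead of letting configure pick up the stale library on its own – a sketch, assuming the self-built MySQL lives under /usr/local/mysql:

    # PHP 5.x: point the MySQL extensions at the self-compiled installation
    ./configure --with-mysql=/usr/local/mysql \
                --with-mysqli=/usr/local/mysql/bin/mysql_config \
                --with-pdo-mysql=/usr/local/mysql
    make && make install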

The SugarCRM website is abysmal. The whole contraption must have been an accident. You only find the right ZIP archive with the upgrade versions via Google and a lot of pointless clicking around. When the download page cannot resolve a URL, you are redirected to a download portal where, of course, there are no links to the upgrades. There is only an upgrade wizard that could suck golf balls through garden hoses (meaning: after a few clicks you are as clueless as before; it does hand you a manual, though). After studying the manual you can indeed go from 5.0.0 to 5.2.0 and finally to 5.5.0. You just have to change certain settings in php.ini (a 20 MB upload “limit”, a very high execution time, etc.).
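
For the record, the php.ini knobs in question look roughly like this – treat the exact values as examples sized for the 20 MB upgrade archive:

    ; php.ini settings for the SugarCRM upgrade wizard (values are examples)
    upload_max_filesize = 20M
    post_max_size = 20M
    max_execution_time = 600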

The icing on the cake was having to involve several browsers, because

  • my Iceweasel (Firefox) on my office workstation has a broken profile and acts up with JavaScript and the like,
  • Konqueror initially refused to load the SugarCRM page (before the upgrade),
  • only Opera could do everything (but only until shortly before the end of the upgrade),
  • Konqueror then loaded the SugarCRM page after all,
  • yet somehow the upgrade wizard still never quite finished,
  • and I finally had to fall back to the Iceweasel on my laptop.

Great, isn’t it? If the Web is the future, we might as well close up shop soon. Browsers are crap – all of them. The same goes for operating systems, so don’t harbour any illusions about fresh fruit.

“X-Mas: Bah humbug”

  • Posted on December 15, 2009 at 9:06 pm

Everyone is suddenly keen on getting a thousand things done before Christmas. What a joy! Miracle Max from The Princess Bride comes to mind.

You rush a miracle man, you get rotten miracles.

Mark my words. And yes, the title of this posting is wildly out of context. The zombie mobs are roaming the streets. Do not leave your house or go to a shopping mall! It’s not worth the risk.

Ruby On Snails Rant

  • Posted on December 4, 2009 at 10:56 pm

Just great. I have spent hours reading bad documentation in order to install a simple web application. The code runs on rails, because it’s Ruby on Rails. So far, so good. You can run the application with a standalone web server. I didn’t want that, because I need an Apache. No problem, there’s Passenger. As a Dexter fan I’d rather not pair an Apache with a Passenger, but Ruby only leaves me worse choices. The installation of the Passenger module wants its install script run with root privileges. Sick. Where the Web is involved, many people’s brains switch off instantly.

Anyway, in the end there was a Passenger, an Apache, SSL/TLS support and a configuration to mount the application. Hours of 404s later, a blog gave me the decisive tip: simply delete the bundled .htaccess and everything works. And indeed, it’s true. Just great – it takes hours because there are 1001 howtos in which everyone does everything differently. Of course PHP is hell too, but there I can at least throw everything into a directory and it more or less works (meaning: it doesn’t throw a 404).
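
For posterity, the entire decisive tip, with a hypothetical application path:

    # Passenger does its own routing; the bundled .htaccess only causes 404s here
    rm /var/www/myapp/public/.htaccess    # path is hypothetical
    apachectl graceful                    # reload the (self-compiled) Apache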

I’m already looking forward to the next upgrades on Rails.

Opera 10 – The Meaning of Beta

  • Posted on August 24, 2009 at 9:30 pm

OK, I confess, I use proprietary software now and then. Besides Opera that’s mostly killer games, such as Windows XP, Adobe Acrobat Reader or Doom 3 (they all incite you to kill; so does Opera, by the way). Since Opera has been annoying me for quite some time, I finally went looking for updates. So far I had been using version 10.00.4402.gcc4.qt3. The package was a .deb file for Debian Lenny. Curious as I am, I ran aptitude show opera (before the upgrade). The last line of the output reads:

The binaries were built on Fedora Core 4 (Stentz) using gcc-4.0.0.

Fine. Today is August 24, 2009. The release notes for Fedora Core 4 date from 2005. GCC 4.0.0 was released on April 20, 2005 (also according to the release notes). Opera 10 beta is new. I do understand that software is sometimes developed in an orderly fashion, that there are rollouts, that billions of unknown clients want to be supported, that old hardware is in use, that unexplored legacy systems (tended by sysadmins with whip and hat) have to run the code, and that the odd goat has to give its life so the last Heisenbugs can be found. But why, of all things, run the source through a compiler from the antique section of the wine rack? Even Debian has moved past GCC 4.0.0 (Lenny ships 4.3.2). What exactly is being tested in Opera’s beta version? Then again, maybe the .deb simply came off some developer’s workstation and was squeezed out on the side.

By the way, I just installed 10.00.4537.gcc4.qt3 (again as a .deb for Lenny). When I repeat the procedure, only the following text remains:

The binaries were built using gcc 4.

The final version will probably just say “The binaries were built”. I’d be embarrassed, too.
