BOFH

Postfix outbound SMTP via TOR Hidden Service

  • Posted on June 13, 2014 at 2:34 pm

I have been looking to link the portable Postfix on my laptop with another Postfix reachable through a TOR Hidden Service. I did some tinkering with TCP proxies, stunnel and other setups. Yesterday I found the article SMTP over Hidden Services, which does the trick. The description uses Postfix's transport map to route individual domains via the SMTP Hidden Service uplink. If you want all e-mails for all domains to go through that link, use the line

*   smtptor:[78uhrgdnsawillgetyoughe746.onion]

in your transport map after the .onion line. Works like a charm. The only downside is that I had to give up server certificate verification, but this can be done in a separate setup on the server side.
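
For completeness, wiring the map into Postfix is standard fare. The smtptor transport itself is defined in master.cf as described in the linked article, so only the activation side is sketched here (file paths are the usual defaults):

    # /etc/postfix/main.cf
    transport_maps = hash:/etc/postfix/transport

    # rebuild the lookup table and reload Postfix
    postmap /etc/postfix/transport
    postfix reload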


Misuse of Crypto by Marketing

  • Posted on August 10, 2013 at 9:53 pm

Deutsche Telekom, Web.de and GMX are now switching on transport encryption (it goes by the name SSL/TLS) for sent and received e-mails. How wonderful. Others have been using this technology for many years. So the industry is celebrating as an achievement something that others have long treated as a matter of course. Fine, there is no other good news about telecommunications providers floating in the clouds anyway. There are, however, two things SSL/TLS does not do.

  • SSL/TLS cannot protect a sent e-mail from third parties.
    Any e-mail server in the delivery chain still has full access to the contents of an e-mail. That is why this kind of encryption is called transport encryption: the e-mail is transmitted in encrypted form while in transit, but at every station along the way it sits there in plain text. Transport encryption only helps against third parties who can see nothing but the transport of the e-mails (the people at the next table in the Internet café, the BND, GCHQ or a corrupt employee, for example). That is exactly what it was designed for, no more and no less. Celebrating it now as protection against surveillance, especially by cloud and communications providers sitting on compromised infrastructure, is a bad joke at best (a minimal configuration sketch follows this list).
  • SSL/TLS cannot authenticate the sender of an e-mail.
    Even with transport encryption, e-mails can still carry a forged sender address. Transport encryption could not care less who uses it.
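
For the record, this kind of transport encryption amounts to a handful of configuration lines, which is part of why celebrating it is so absurd. A minimal Postfix sketch (hostnames and certificate paths are placeholders, not any provider's actual setup):

    # /etc/postfix/main.cf: opportunistic TLS for outbound and inbound SMTP
    smtp_tls_security_level  = may
    smtpd_tls_security_level = may
    smtpd_tls_cert_file = /etc/ssl/certs/mail.example.org.pem
    smtpd_tls_key_file  = /etc/ssl/private/mail.example.org.key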

De-Mail, by the way, is no better, no matter what anyone tries to talk you into. So industry follows politics and lies to its customers. Brave new world.

If you are interested in the background, or if you would like to catch big companies lying yourself, please head to the nearest CryptoParty.

About these Junk E-Mails

  • Posted on March 4, 2013 at 4:12 pm

Please refrain from sending me e-mails that have only HTML content. I prefer to read your message, and if you are incapable of expressing yourself in text, then I really don’t care what you have to say. HTML-only e-mails will be deleted without notice. Have a nice day!
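
If you want to automate that policy, a Sieve rule along these lines would do it (a sketch; it matches messages whose top-level Content-Type is text/html, i.e. HTML without any plain-text part):

    # Sieve sketch: drop HTML-only messages without notice
    if header :contains "Content-Type" "text/html" {
        discard;
        stop;
    }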

Wireless Tablet Terror

  • Posted on June 16, 2011 at 1:43 pm

Wireless tablets are all the fashion. I wonder why. I just had to reconfigure a wireless access point to use TKIP/AES instead of AES alone. WPA2 with AES has been around since 2006 or even earlier. The end of personal computing is here. My laptop had no problems using the Wi-Fi network. The new gadgets didn't get a single packet from the Internet.
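
For comparison, on an access point driven by hostapd the mixed-cipher setup would look roughly like this (a sketch; the access point in question only offers a web interface, so this is not its actual configuration):

    # hostapd.conf excerpt: accept both WPA (TKIP) and WPA2 (CCMP/AES) clients
    wpa=3
    wpa_key_mgmt=WPA-PSK
    wpa_pairwise=TKIP
    rsn_pairwise=CCMP TKIP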

Thinking positively, at least I got around to sliding my fingers across the Galaxy Tab just like a dog scratching at a door. It was definitely a whole new user experience. No one should miss this. It's our new way of life.

Sony "hacked"? So what?

  • Posted on April 27, 2011 at 10:29 am

The Playstation Network was attacked and (partially) compromised. An outcry goes through the media. 75 million users are told to keep an eye on their credit card statements because card information was copied. I have no sympathy whatsoever for the outrage. We are talking about a company that is not even able to protect the master key of the PS3 and that sets lawyers on hackers who can solve mathematical equations. How about shrinking the legal department a little and hiring more engineers and security experts instead?

But who needs that; the lawyers will sort it out. Have fun!


Failure is always an option

  • Posted on March 12, 2011 at 2:05 am

Blogging has been a bit neglected lately. This is due to the influence of crappy software and the non-existent design of Things™. Let's get started with the Cloud. One day a server disappeared. Investigation yielded: the ISP has a problem and will fix it ASAP. The monitoring system kept sending alerts for two days. Further investigation yielded:

  • The ISP had fixed the problem. It involved a RAID controller and a filesystem, both probably corrupt.
  • Some (virtual) servers died in the process.
  • The ISP advised recreating the deceased system with a fresh installation and backups.
  • We had backups. Most other Cloud Believers do not. Tough luck.

Then there's IPv6. It runs smoothly in two local networks (these networks not being local any more; NAT has to die). Apart from some applications which don't get it, everything is fine with IPv6. Then suddenly my ISP fiddles with the modem's firmware, IPv6 appears briefly at the other end of my last mile, and then it is gone again. This incident left the MTU between my VPN server and me slightly changed, so that the IPv6 tunnel stalls. Reset the MTU to 1280, and everything works again.
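
The fix itself is a one-liner; tun0 is an assumption, substitute whatever your tunnel interface is actually called:

    # clamp the tunnel interface to the IPv6 minimum MTU
    ip link set dev tun0 mtu 1280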

Then there’s…ah, crap. I don’t care. I’m tired. Good Night.

Ruby on Rails – continued madness

  • Posted on January 16, 2011 at 4:41 pm

I am still at odds with Ruby and Rails, but I now know why the stupid gems are not available as Debian packages. The Ruby people insist on using their Gems as the package manager. Debian already has a very capable package manager. Having two package managers is a bad idea. Other languages seem to manage well. Take Perl, for example. Perl has CPAN, but you can build a Debian package out of any module found on CPAN (a sketch follows below). It remains a mystery why the Ruby Gems refuse to work this way.
It seems fair to warn everyone intending to use Ruby: "Please install this crap from source and forget about your distribution's package manager!"
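
For the Perl case, the tool doing the CPAN-to-Debian conversion is dh-make-perl; roughly like this (the module name is just a placeholder):

    # build a .deb from an arbitrary CPAN module
    apt-get install dh-make-perl
    dh-make-perl --build --cpan Some::Module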

Speaking of gems, here's another one. I have recreated the production environment in order to simulate the intended upgrade path. I have installed Rails 3.0.3 instead of the Rails 2.1.2 package used on the production environment. I just want to see what happens. After changing the version number in the application's environment.rb file I get the "no such file to load -- initializer" error message. Great, a file or some files are missing, but no mention of which files. Checking the logs reveals that the Rails 2.1.2 gem is missing. Good thing we're back to the "you've got to have multiple versions of the same gem installed, just to feel good about it" mindset. I am a fool to assume that high-level languages are designed to protect sysadmins and developers from the peculiarities of specific versions of the components involved.

Update: OK, I finally found the strategy I wanted to avoid. Remove all Ruby Debian packages, get the Ruby Enterprise Edition (which is basically a fork of Ruby intended to incorporate sanity) and install it. Kudos to Debian for not letting this crap on board, kudos to the REE developers for delivering this solution, and a heartfelt and sincere fuck you to Ruby!

Madness on Rails

  • Posted on December 31, 2010 at 1:12 pm

Ruby on Rails is all the fashion. It is quick, fast, efficient, trendy, new, slick, cool and, most of all, extremely annoying. I am trying to upgrade an Apache+Phusion Passenger+Rails installation. Everything runs on Debian 5.0. Apache and Phusion Passenger are compiled from source (as is the MySQL database). Ruby and Rails come from the Debian packages (with backports, and with special Ruby backports). Everything is a total mess.

  • You need a ton of Ruby Gems for everything to work.
  • You have a ton of versions for all the Gems. Of course you can install them in parallel.
  • Debian’s Ruby Gems won’t work.
  • You need a RubyGems manager from backports or from wherever.
  • You get "NULL pointer given" error messages; avoiding exactly this is why I want to use a high-level language. By the way, this error message hits right in the middle of a Redmine database schema upgrade, and the upgrade script just remarks that some updates won't get executed. Hello? Database consistency, anyone?
  • You can't figure out easily why something fails, what requirements it has and which combination of versions you need to get it working.
  • You get lots of 500 and 404 HTTP status codes.
  • Something got updated and lost MySQL support, so MySQL support is now a Gem which can't find the includes and libraries in /usr/local/, and there is no help in discovering the options that direct the RubyGems manager (a sketch of passing build flags through gem install follows this list).
  • You can choose between Ruby 1.8 and 1.9. Debian probably won't switch to 1.9 because of stability issues. And by the way, 1.8 and 1.9 have Gems with identical version numbers that are nevertheless completely different.
  • If you finally discover the Ruby Gem backports and update your Gems, your Ruby Gem Manager will get updated and all Gems will be lost and have to be installed again.
  • RubyGems 1.4.0 proudly tells you that rubygems is switching to a 4-6 week release schedule, so that things break more often.
  • RubyGems takes ages to run and to do anything (while maxing out one CPU core and leaving the others at 0% load).
  • The Phusion Passenger install script for the Apache module has multi-coloured output, but you cannot do much if something isn’t found.
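
Regarding the MySQL gem above: gem install can pass configure-style flags through to a gem's native extension build after a double dash. A sketch, assuming MySQL was installed under /usr/local as described (adjust the path to wherever mysql_config actually lives):

    # hand the build flags through to the gem's extconf
    gem install mysql -- --with-mysql-config=/usr/local/bin/mysql_config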

The list is probably endless. I know why I use PHP instead. PHP is crap, but at least it’s honest about it and you can get it to run eventually. Hell, even web apps in C/C++ or Java are easier to maintain.

802.11NO and the Linksys WAP610N

  • Posted on October 29, 2010 at 3:29 pm

A few months ago I set up a Linksys WAP610N access point in order to test its performance and its reliability. For all of you who are interested in the results – don’t buy this access point. Here’s why.

When under load, the WAP610N spontaneously reboots. It also reboots when not under load. Most of the time it works, sometimes it doesn't. This is highly undesirable for any office environment. It seems to be a firmware issue, but the behaviour can still be observed with the latest version. There is no fix.

If you try to configure the beast you have to use a web interface that lacks quite a bit of comfort. You can use the 2.4 GHz or the 5 GHz modes. If you stay with 2.4 GHz, you can run exactly one of 802.11b, 802.11g or 802.11n, or you use the 802.11b/g/n mixed mode. You cannot run 802.11g/n. Fortunately you can use any combination of 802.11a and 802.11n when running the 5 GHz modes.

In order to avoid the reboots I have configured 802.11a mode and added a second access point (Buffalo WHR-HP-G54) to handle 802.11g clients. The Buffalo never reboots unless told to, and the WAP610N seems to be more stable when running 802.11a only. Let's see whether it stays that way.

P.S.: If you use a GNU/Linux system with an Intel® WiFi network adapter that uses the iwlagn driver, make sure you disable 802.11n with the module option 11n_disable=1 until Intel® has fixed the card's firmware. Judging from the bug reports, it is also a good idea to set swcrypto=1 in order to shift the cryptographic operations to software.
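
A persistent way to set both options (the file name below is only a convention; any .conf file under /etc/modprobe.d/ will do):

    # /etc/modprobe.d/iwlagn.conf
    options iwlagn 11n_disable=1 swcrypto=1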

Telecommunications Zombies

  • Posted on September 7, 2010 at 3:20 pm

Zombies are a familiar concept to every cineaste and survival devotee. They are closely followed by Amiga computers that are still in operation. But nothing beats nostalgia and the return of the undead quite like a telecopier!

Some call it a fax, derived from telefacsimile, which was happily abbreviated to telefax and finally to fax. Fax systems use good old POTS. Our fax server roars onto the net with an ELSA MicroLink 33.6TQV. Without a tailwind it easily manages 14400 baud (that is bit/s, for all the broadband-damaged). Just today I even used a G3 fax machine that manages 33.4 kbaud, also without a tailwind. In August we even upgraded a fax server and installed a B1 PCI card for faxing over ISDN (that is real fax broadband!).

Upgrades? Pah! The fax server reports the modem's firmware as Ver. 1.24 vom 17.12.96, i.e. version 1.24 from 17 December 1996.

An upgrade will happen only once I read that version string on Bugtraq.

Semi-sentient installation scripts

  • Posted on July 5, 2010 at 6:01 pm

I like the SYMPA mailing list manager. SYMPA really is comfortable and has more features than you will ever need (which is also its biggest disadvantage). Usually I install it from source, because that allows for better control of the upgrade path. Unfortunately SYMPA 6.0 and 6.1 are a bit too new for the current Debian 5.0 (stable) release. So I decided to use the SYMPA package from the repository, which cannot be installed.

The reason is my aberrant behaviour. I do not run databases on all hosts. There are database hosts and there are application hosts. This is a very old-fashioned way of keeping things separated. This also leads to a very elegant failure of the SYMPA package’s post-install script.

  • The post-install script creates a sympa database and a sympa user for you. In order to do this it asks for the Postgres admin password.
  • If you supply all credentials (only the Postgres admin credential exists so far; the sympa role is created by the script) and you have no Postgres database package on your system, the install script fails with "Failed to determine UID for postgres user".
  • The package manager does not automatically install Perl's DBD::Pg module, which is needed for SYMPA to interact with the Postgres database.
  • Either the install script or I fail to provide a password for the Postgres admin user (I am pretty sure I entered it).
  • The install script fails to hand over ownership of the sympa database to the sympa login role, resulting in a "permission denied" error when SYMPA is started.
  • If you change the ownership of the database and rerun aptitude install sympa, then the install script complains that the sympa database already exists.
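
For reference, the ownership change mentioned above is a one-liner on the database host (run it as the postgres superuser; sympa is both the database and the login role the package creates):

    # hand the sympa database over to the sympa role
    psql -c 'ALTER DATABASE sympa OWNER TO sympa;'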

This is great work. There are already bug reports in the Debian bug database, so I am not the first one to encounter this. Folks, people really do run separate hosts for separate purposes, especially now that virtualisation has become so widespread. Please don't assume that sane admins run overloaded servers with a thousand services, just in case. Thanks!


Where is Sunshine Computing?

  • Posted on June 16, 2010 at 1:43 pm

It's raining in Vienna. I don't mind. I like rain. I like the temperatures even more. Nothing beats sitting in the office in a t-shirt and enjoying the 16°C. This is paradise. Unfortunately I am moving a web site from server $OLD to server $NEW. The web space consists of about 17 GiB (or 14 GiB compressed). This isn't much, especially in times of overpowered hardware and the almighty Internet. The servers are cloud-based, and moving data can be done in nanoseconds, just by wishing it into existence.

And then you woke up.

My attempts to move the data over SSH/tar/rsync have failed, because server $OLD uses a special setup with proxies that slow down some data transfers (don't ask). So now I am transferring the most recent encrypted backup from the backup server via HTTPS to server $NEW. The backup server and server $NEW are well-connected, co-located stuff, cloudy and such. The download software says that it still needs about 90 minutes at a rate fluctuating between 1 and 4 MiB/s. This is next to instantaneous. Considering that most consumer-grade Internet access lines have a low upstream (hey, you're supposed to consume, not create!) and that cloudy companies want to keep you and your data separated as much as possible, the outlook from here is just fine. Just adding 'i's in front of every word isn't going to cut it. Too bad.

If cloud computing is the future, you better start relying on local arks with Gigabit Ethernet!

Internet Roadworks

  • Posted on April 19, 2010 at 12:29 am

Sometimes the Internet is simply broken. These things happen. It has just happened to me again. And it was the one with the cable, that is, the more stable one. In any case, it was gone too. Then it was briefly back. And gone, and back, and gone, and so on. That makes you nervous. Loosely following Episode One:

A communications disruption could mean only one thing: invasion.

In short, I wrote an e-mail to report the problem. I am a friendly person, after all. Lo and behold, shortly after sending it (the e-mail), everything worked again. So far, so good. But why? Incidentally, this has happened several times before (problems stop right after an e-mail to support has been sent). So who is listening in?

But let's leave it at that. Port mirroring must be good for something. Greetings to the uniformed Oompa-Loompas along the way.
