
Merging Kerberos and LDAP protocols

Posted on 2007-09-18

News came out today that MIT has launched a Kerberos-related consortium.

IMHO, the future direction for Kerberos should be to merge the protocol with LDAP (e.g. in a future LDAPv4 revision of the protocol).

As the LDAP protocol is extensible through extended operations, this could be achieved by transporting Kerberos operations inside LDAP, thus preserving backward compatibility with LDAPv3. Another approach, breaking backward compatibility, would be to modify the LDAP bind/unbind operations so that they provide Kerberos kinit/kdestroy functionality, and to use extended LDAP operations or define a new set of LDAP operations for the rest of the Kerberos functionality.
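For concreteness: LDAPv3's extensibility hook is the extended operation of RFC 4511, a generic (OID, value) envelope. A hypothetical "kinit" operation could be a new OID whose request value carries a Kerberos AS-REQ. The OID below is made up for illustration; only the ExtendedRequest definition itself comes from the RFC:

```
-- ExtendedRequest as defined by RFC 4511 (LDAPv3)
ExtendedRequest ::= [APPLICATION 23] SEQUENCE {
     requestName      [0] LDAPOID,
     requestValue     [1] OCTET STRING OPTIONAL }

-- A hypothetical kinit extended operation might then use, e.g.:
--   requestName  = 1.3.6.1.4.1.99999.1.1      (made-up OID)
--   requestValue = DER-encoded AS-REQ message (RFC 4120)
```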

Here's my rationale for merging the two protocols: because Kerberos is a protocol distinct from LDAP, the distinction causes a lot of confusion among implementors, system architects, developers and administrators. The result is that in many deployments the two protocols are misused.

The correct distinction is that you use Kerberos for authentication (that is, proving that a user is who he claims to be) and LDAP for authorization (that is, given an authenticated user, determining information related to granting access to some resources - such as group memberships, possibly some application-specific ACLs, etc.) and for other data for which a directory is useful (it's hard to list all possible uses of LDAP, but mail aliases are a fine example).

But because the protocols are separate and very hard to set up together on a single authentication/authorization/directory server (or a group of servers!), people go with only one of them, usually using LDAP for authentication instead of Kerberos (see mod_auth_ldap for Apache), effectively preventing themselves from implementing usable single sign-on.

For an example, let's have a look at the available OSS solutions. Apache Directory has had Kerberos and LDAP integrated from the start, but in its current state it is painfully slow as a server - and a mail server using LDAP for aliases can hammer the LDAP server quite a bit. MIT Kerberos cannot use LDAP databases. Neither can Shishi Kerberos, although they plan to implement this in the future. That leaves us with Heimdal Kerberos. Heimdal requires the LDAP server to be on the same machine and to support LDAPI connections. That rules out Fedora Directory Server, whose stable version 1.0.4 doesn't support LDAPI yet (although the CVS development version finally got LDAPI support recently).

I've tried setting up a Heimdal Kerberos server with OpenLDAP (with the SASL2 daemon in the middle) and succeeded, but it was a royal pain in the *ss.

All the HOWTOs I've found on the web described a brain-dead design where Kerberos maintains its own classic file-based database, separate from the OpenLDAP database, and one has to keep the two in sync (since it's possible for one to have a user that the other doesn't). In such a setup replication is really troublesome and has to be done over two different channels and mechanisms (e.g. LDAP syncrepl plus Kerberos' own redundant servers).

I wanted an integrated design, where Heimdal stores its data directly in OpenLDAP. This way, I couldn't create a Kerberos account without an LDAP account (well, I could if I omitted the Kerberos objectclass and attributes, but it would be harder to do and easier to detect). I could also rely on LDAP's replication mechanisms alone and easily provide a fault-tolerant cluster of LDAP and Kerberos servers.
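For illustration, an account entry in such an integrated tree would combine the POSIX and Heimdal Kerberos objectclasses in a single object, roughly like this (attribute and class names after Heimdal's hdb.schema; treat the exact names, values and DIT layout as an assumption, not a recipe):

```ldif
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: krb5Principal
objectClass: krb5KDCEntry
uid: jdoe
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/jdoe
krb5PrincipalName: jdoe@EXAMPLE.COM
krb5KeyVersionNumber: 1
```

Deleting such an entry removes both the LDAP account and the Kerberos principal in one operation, which is exactly the property the file-based split design lacks.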

Unfortunately, the diagram for this setup looks quite daunting to a beginner implementor, as you can see for yourself.

There were also lots of gotchas:

  • Heimdal can connect to LDAP as its database only over LDAPI - a networkless LDAP connection over a UNIX domain socket. So you have to configure OpenLDAP in a quite non-standard way, and the latest stable version of Fedora Directory Server doesn't even have such an option (FDS CVS head, OTOH, got it implemented recently). Heimdal's LDAPI requirement stems from the fact that LDAPI doesn't require any authentication, so the Heimdal code is simpler and you don't have to create an LDAP account for Heimdal to simple-bind with. But the LDAPI socket is a potential security hole and its UNIX permissions have to be set tight.
  • You end up with a somewhat circular design: Heimdal connects to OpenLDAP over LDAPI to access its database, while OpenLDAP connects to the SASL2 daemon, which connects to Heimdal over Kerberos whenever OpenLDAP authenticates users who SASL-bind over the LDAP protocol. When users simple-bind over LDAP, OpenLDAP handles everything by itself. As a result, you get separate userPassword and krb5Key attributes for each account, storing redundant authentication data that has to be kept in sync. Which brings us to the next point:
  • Keeping passwords in sync between LDAP and Kerberos requires building the special smbk5pwd module from the contrib directory of the OpenLDAP sources, then installing and configuring it. Otherwise you can end up with different passwords for authenticating a user via Kerberos/OpenLDAP+SASL+Kerberos and via OpenLDAP simple bind.
  • There are lots of different LDAP account management tools, and most of them don't dig Kerberos at all. Some make unfounded assumptions about the layout of your directory and require their own non-typical schema and/or directory structure. You'll have the most luck writing your own account management tools.
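For reference, the smbk5pwd piece from the third point boils down to a couple of lines in slapd.conf once the module is built (the module path is distribution-dependent - an assumption, check your build):

```
# slapd.conf - load the contrib module, then enable the overlay
# inside the database section, so that LDAP password changes
# update the Kerberos keys as well
moduleload  smbk5pwd.la

# ... within the database definition:
overlay     smbk5pwd
```

Building it is the hard part; the configuration itself is short, which makes it all the more annoying that no distribution ships it enabled.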

I also had to do quite a bit of magic with options in lots of configuration files (this is on Fedora Core/RHEL):

 

/etc/krb5.conf
/etc/openldap/ldap.conf
/etc/openldap/slapd.conf
/etc/sysconfig/openldap
/etc/sasl2/slapd.conf
/etc/saslauthd.conf
/var/heimdal/kadmind.acl
/usr/lib64/sasl2/slapd.conf (a symlink to /etc/sasl2/slapd.conf)
/usr/lib/sasl2/slapd.conf (a symlink to /etc/sasl2/slapd.conf) 
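To give a taste of the non-default options involved, here is the kind of fragment that ends up in two of those files (the DN, paths and mech_list are examples from my setup, not universal defaults - treat them as assumptions):

```
# /etc/krb5.conf - point the Heimdal KDC at the LDAP database over LDAPI
[kdc]
    database = {
        dbname    = ldap:ou=KerberosPrincipals,dc=example,dc=com
        mkey_file = /var/heimdal/m-key
    }

# /etc/sasl2/slapd.conf - make OpenLDAP hand SASL binds to saslauthd,
# which in turn verifies the credentials against the Heimdal KDC
pwcheck_method: saslauthd
mech_list: plain login
```

Multiply this by the rest of the file list above and you get a feel for the amount of plumbing required.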

In the end I had a basic working setup, but after looking at its kludginess I decided to wait for Samba 4 (maybe I'm naive).

In summary, to build a basic network authentication server for single sign-on and directory services using OSS, you have to stitch together OpenLDAP and Heimdal in a very uncommon configuration, changing almost all possible options from their defaults on any major Linux/BSD distribution and making significant changes in dozens of places across all the configuration files.
This task is beyond the patience of even the most persistent admins, so most installations end up as a sort of messed-up, half-cooked, sorta-works solution - and you still have to write your own account management software. Compared to that, MS AD has all of this functionality set up right after installation. What's irritating is that the same functionality can be achieved with current OSS solutions; it's only a matter of overcomplex configuration.

I think that making Kerberos an extension of the LDAP protocol would force LDAP implementors (I mean OpenLDAP and Fedora Directory Server) to produce solutions with this basic functionality working out of the box, without having to spend a month constructing a monumental, fragile and possibly incorrect configuration.

(UPDATE: the subject of my master's thesis is a proof of concept implementation of this idea: KrbLDAP paper - my master's thesis)

SQL-style LDAP update tool

Posted on 2007-02-26

A feature that I find sorely missing from various LDAP implementations (and from the protocol itself) is the ability to do mass updates (like you can in SQL on relational databases).

For example, I'd like to set the same manager for all the employees in the marketing department - let's call him Piotr Kwasigroch.

In SQL it would be trivial:

UPDATE pracownicy SET manager = 'pkwasigroch' WHERE ou = 'marketing';

Unfortunately, although in LDAP we have a powerful search filter syntax at our disposal, LDAP lacks a standard mechanism for mass updates.

So I've written my own utility that emulates the functionality of SQL language for updating LDAP directories.

Its usage follows this pattern:

  update_ldap_generic.pl SET 'attribute=value' WHERE '(LDAP_FILTER)'
  update_ldap_generic.pl ADD 'attribute=value[,attribute2=value2,...]' WHERE '(LDAP_FILTER)'
  update_ldap_generic.pl REPLACE 'attribute=value' WITH 'attribute=value' WHERE '(LDAP_FILTER)'

As you can see, the syntax differs a bit from SQL to accommodate the semantics of LDAP directories - specifically, the support for multi-valued attributes.
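Under the hood, each such command is just a search followed by per-entry modifications. The SET variant can be emulated with the standard OpenLDAP tools by turning the matched DNs into "changetype: modify" LDIF records. A minimal sketch (the two DNs are inlined here where ldapsearch output would normally be piped in, and the final ldapmodify step is shown only as a comment, since it needs a live server):

```shell
# Expand SET 'manager=uid=pkwasigroch,ou=People,o=MyCompany' over matched DNs
# into LDIF "replace" records. In real use:
#   ldapsearch -LLL -x -b "$BASE" '(ou=marketing)' dn | <this awk> | ldapmodify -x -D "$BINDDN" -w "$PASS"
printf 'dn: uid=jsmith,ou=People,o=MyCompany\ndn: uid=anowak,ou=People,o=MyCompany\n' |
awk '/^dn: / { print
               print "changetype: modify"
               print "replace: manager"
               print "manager: uid=pkwasigroch,ou=People,o=MyCompany"
               print "" }'
```

The script wraps exactly this search-then-modify loop behind the SQL-like syntax, so you don't have to hand-write the LDIF every time.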


Examples:


Setting the same password for all the users in the directory:

 

  update_ldap_generic.pl SET 'userPassword=migration.3781' WHERE '(objectclass=posixAccount)'


Changing an organizational unit name from 'tr' to 'training':

 

  update_ldap_generic.pl REPLACE 'ou=tr' WITH 'ou=training' WHERE '(ou=tr)'

 

Change the manager for all the marketing employees:

  update_ldap_generic.pl SET 'manager=uid=pkwasigroch,ou=People,o=MyCompany' WHERE '(ou=marketing)'

NOTICE: in the script's code (downloadable below) you need to supply the connection parameters or provide your own mechanism for getting them from the user. 

The script: update_ldap.pl

Update: The utility has been moved to its own project site on Google Code Project Hosting: http://code.google.com/p/ldap-update/

 



 

Tivoli Storage Manager anomaly notification scripts

Posted on 2007-01-11

Most everyday tasks of a Tivoli Storage Manager server administrator can be automated.

Here's a modular system of shell scripts that report various anomalous conditions.

One of the modules also checks whether all the nodes have executed their backup schedules recently enough.

The nodes have to be defined in the module's source.

 tism_report_scripts.tar.bz2

Perl script that returns yesterday's date

Posted on 2006-09-25

Here's a simple yet useful script that prints yesterday's date (with all the calendar calculations done properly) on standard output:

#!/usr/bin/perl
use POSIX qw(strftime);
my $now  = time;
my $yest = $now - 60 * 60 * 24;
my $ndst = (localtime $now)[8]  > 0;  # is DST in effect today?
my $tdst = (localtime $yest)[8] > 0;  # was DST in effect yesterday?
$yest -= ($tdst - $ndst) * 60 * 60;   # compensate when a DST switch falls in between
print strftime("%F", localtime($yest))."\n";

 

 I've named it yesterday.pl.

It comes in handy quite often, e.g. when I need to make a directory with yesterday's date in its name:

mkdir $(yesterday.pl)_i_went_mushroom_picking

 

or make a backup of yesterday's configuration file that I'm going to modify today:

cp config_file.conf  config_file_$(yesterday.pl).conf   
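(As an aside: on Linux with GNU coreutils, date can do the calendar math itself. The -d flag is GNU-specific, though, and won't work on e.g. Solaris or AIX - hence the portable Perl script above.)

```shell
# GNU date equivalent of yesterday.pl (GNU-specific -d flag)
date -d yesterday +%F
```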


 

Duplicate file elimination script

Posted on 2006-09-23

One of the most inconvenient problems occurring on a typical workstation is the problem of duplicate files.

From time to time I copy something somewhere temporarily, or download the same thing many times to different locations.

Such duplicate files can have different names and lie in different directories, but they contain the same data and eat up disk space unnecessarily.

Unfortunately, I didn't manage to find a ready-made utility to effectively detect and reduce such files - even though the task is algorithmically very simple.

Actually, one just needs to proceed along the following scheme:

  • traverse a directory tree recursively
  • for each file encountered, store its name in a hash table indexed by file size
  • but first check the hash table whether we have already stumbled upon a file of the same size (only files of the same size can have the same contents)
  • if we have seen a file of the same size, compute checksums (e.g. MD5) of both files; if they are identical, we've found a dupe
  • process the dupe according to your judgment - I, for example, apply a non-destructive approach, that is, I don't destroy any useful information: I remember the duplicate file's name, delete the file, and replace it with a symlink to the original file, named the same as the dupe. This way, no information gets lost and the storage space occupied by the dupe gets freed.
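The scheme above can also be sketched in shell with GNU findutils/coreutils (an illustration of the algorithm, not the Perl scripts below; filenames containing tabs, newlines or runs of spaces are not handled, and the directory should be an absolute path so the symlink targets resolve):

```shell
# dedupe: replace later duplicates in a tree with symlinks to the first copy.
# Step 1: list files as "size<TAB>path"; keep only paths whose size repeats
#         (only files of the same size can have the same contents).
# Step 2: md5sum the candidates; an already-seen hash means a duplicate.
# Step 3: swap each duplicate for a symlink to the first file with that hash.
dedupe() {
    find "$1" -type f -printf '%s\t%p\n' | sort -n |
    awk -F'\t' '$1 == prev { print last; print $2 } { prev = $1; last = $2 }' |
    sort -u |
    xargs -r -d '\n' md5sum |
    awk '{ sum = $1; $1 = ""; sub(/^ +/, "")
           if (sum in seen) printf "%s\t%s\n", seen[sum], $0
           else seen[sum] = $0 }' |
    while IFS="$(printf '\t')" read -r orig dupe; do
        rm -- "$dupe" && ln -s -- "$orig" "$dupe"
    done
}
```

Usage: dedupe /some/absolute/path. Note that "first copy" here means the lexically first path, which differs slightly from the scripts' first-found order.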

After conceiving this algorithm (though I don't suppose I'm the first to come up with it), I implemented it in the form of two Perl scripts:

  • dupes_symlink_single_dir.pl eliminates duplicates in a single directory tree (the first file that is found stays untouched, but all its following dupes get substituted with symlinks)
  • dupes_symlink_two_dirs.pl operates on two directories. It searches for original files in the first supplied directory subtree, and eliminates all matching dupes from the second supplied subtree.

 

The scripts are available for download below.

dupes_symlink_single_dir.pl.txt

dupes_symlink_two_dirs.pl_.txt

(Update 2008-06: it turns out there's already a program that does this and more: http://fslint.googlecode.com/svn/trunk/doc/FAQ. It uses the same algorithm, but additionally checks whether the duplicate files aren't hardlinks to each other, and uses a SHA1 checksum in addition to MD5. The program has a nice GUI. In short, I recommend checking it out too!)

