Posts on tag: source

New release of dbtool

I released a new version of dbtool, a small tool I wrote over 15 years ago! I reworked the configure script and a couple of bits here and there. The most fascinating thing, however, is that I didn't have to alter the source at all. It still compiles out of the box, with Berkeley DB or GDBM, and with PCRE. It's really great that the APIs of those libraries still support 15-year-old client code; I'm amazed.

I also moved the source to GitHub.

Just in case: with dbtool you can manage key/value data storage in the shell (on the command line or in shell scripts). It's fast and simple to use, doesn't have many dependencies, and even supports encryption (using the AES block cipher).
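
A quick taste of what that looks like. Note, the flags below are purely illustrative, not taken from the actual man page; check the documentation for the real invocation:

dbtool -d data.db -i somekey somevalue   # store a key/value pair (hypothetical flags)
dbtool -d data.db -s somekey             # look it up again (hypothetical flags)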

↷ 15.05.2015 🠶 #source

UDPXD - A general purpose UDP relay/port forwarder/proxy

There are cases when you need to change the source IP address of a client but can't do it in the kernel, either because you don't have a firewall running, because it doesn't work, or because no firewall is available on the platform.

In such cases the usual approach is to use a port forwarder. This is a small piece of software which opens a port on the inside, visible to the client, and establishes a new connection to the intended destination, which is otherwise unreachable for the client. In effect, a port forwarder implements hiding NAT (masquerading) in userspace. Tools like this are also useful for testing, to simulate requests when the tool you need for the test is unable to set its source IP address (binding address).
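
To illustrate the principle, here is a minimal, strictly sequential sketch of my own (not udpxd's actual source): listen on one address, open a second socket bound to the desired source address, and shuttle datagrams between the two. It handles only one client at a time, omits most error handling, and all addresses are placeholders:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void) {
    struct sockaddr_in inside_addr = {0}, bind_addr = {0}, target = {0}, client;
    socklen_t clen;
    char buf[65535];

    inside_addr.sin_family = AF_INET;            /* where the client sends to  */
    inside_addr.sin_port = htons(5353);
    inet_pton(AF_INET, "127.0.0.1", &inside_addr.sin_addr);

    bind_addr.sin_family = AF_INET;              /* desired outgoing source ip */
    inet_pton(AF_INET, "192.0.2.1", &bind_addr.sin_addr);

    target.sin_family = AF_INET;                 /* the real destination       */
    target.sin_port = htons(53);
    inet_pton(AF_INET, "198.51.100.1", &target.sin_addr);

    int inside = socket(AF_INET, SOCK_DGRAM, 0);
    bind(inside, (struct sockaddr *)&inside_addr, sizeof(inside_addr));

    int outside = socket(AF_INET, SOCK_DGRAM, 0);
    bind(outside, (struct sockaddr *)&bind_addr, sizeof(bind_addr));

    for (;;) {
        /* client -> target: remember the client, resend from the bind address */
        clen = sizeof(client);
        ssize_t n = recvfrom(inside, buf, sizeof(buf), 0,
                             (struct sockaddr *)&client, &clen);
        if (n < 0) break;
        sendto(outside, buf, n, 0, (struct sockaddr *)&target, sizeof(target));

        /* target -> client: relay the answer back to the remembered client */
        n = recvfrom(outside, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) break;
        sendto(inside, buf, n, 0, (struct sockaddr *)&client, clen);
    }
    return 0;
}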

For TCP there are LOTS of solutions available, e.g. tcpxd. Unfortunately there are not so many solutions if you want to do port forwarding with UDP. One tool I found was udp-redirect, but it is just a dirty hack and didn't work for me. Also, it doesn't track clients and is therefore unable to handle multiple clients simultaneously.

So I decided to enhance it. The current result is udpxd, a "general purpose UDP relay/port forwarder/proxy". It supports multiple concurrent clients, it is able to bind to a specific source IP address, and it is small and simple. Just check out the repository, enter "make" and "sudo make install", and you're done.

How does it work? I'll explain it with a concrete example. On the server where this website is running, multiple IP addresses are configured on the outside interface, because I'm using jails to separate services. One of those jail IP addresses is 78.47.130.33. Now if I want to send a DNS query to the Hetzner nameserver 213.133.98.98 from the root system (not from inside the jail with the .33 address!), the packet will go out with the first address of the interface. Let's say I don't want that (either for testing, or because the remote end only allows me to use the .33 address). In this scenario I could use udpxd like this:

udpxd -l 172.16.0.3:53 -b 78.47.130.33 -t 213.133.98.98:53

The IP address 172.16.0.3 is configured on the loopback interface lo0. Now I can use dig to send a DNS query to 172.16.0.3:53:

dig +nocmd +noall +answer google.de a @172.16.0.3
google.de.              135     IN      A       173.194.116.151
google.de.              135     IN      A       173.194.116.152
google.de.              135     IN      A       173.194.116.159
google.de.              135     IN      A       173.194.116.143

When we watch our external interface with tcpdump, we see:

IP 78.47.130.33.24239 > 213.133.98.98.53: 4552+ A? google.de. (27)
IP 213.133.98.98.53 > 78.47.130.33.24239: 4552 4/0/0 A 173.194.116.152,
              A 173.194.116.159, A 173.194.116.143, A 173.194.116.151 (91)

And this is how the same request looks on the loopback interface:

IP 172.16.0.3.24239 > 172.16.0.3.53: 4552+ A? google.de. (27)
IP 172.16.0.3.53 > 172.16.0.3.24239: 4552 4/0/0 A 173.194.116.152,
              A 173.194.116.159, A 173.194.116.143, A 173.194.116.151 (91)

As you can see, dig sent the packet to 172.16.0.3:53; udpxd took it, opened a new outgoing socket bound to 78.47.130.33, and sent the packet with that source IP address to the Hetzner nameserver. It also remembered which socket that particular client (source IP and source port) had used. When the response came back from Hetzner, udpxd looked up the matching internal client in its cache and sent the response back to it, using the loopback address as source.
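
This per-client bookkeeping is exactly what the single-client sketch above leaves out. Conceptually (the names here are mine, not necessarily udpxd's), each inside client gets an entry roughly like this:

/* one entry per inside client: maps the client's source ip:port to
   the outgoing socket used on its behalf (names are illustrative) */
struct client_entry {
    struct sockaddr_storage inside_src;  /* client source ip + port     */
    int outside_fd;                      /* socket bound to the -b addr */
    time_t last_seen;                    /* for expiring idle clients   */
    struct client_entry *next;           /* simple linked-list cache    */
};

When a datagram arrives on an outside socket, the matching entry tells the relay which inside client the answer must be forwarded to.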

As I already said, udpxd is a general purpose UDP port forwarder, so you can use it with almost any UDP protocol. Here's another example using NTP. This time I set up a tunnel between two systems, because udpxd cannot bind to any interface on port 123 while ntpd is running (ntpd binds to *.123). So on system A I have three udpxd instances running:

udpxd -l 172.17.0.1:123 -b 78.47.130.33 -t 178.23.124.2:123
udpxd -l 172.17.0.2:123 -b 78.47.130.33 -t 5.9.29.107:123
udpxd -l 172.17.0.3:123 -b 78.47.130.33 -t 217.79.179.106:123

Here I forward NTP queries on 172.17.0.1-3:123 to the real NTP pool servers, again using the .33 address as source for outgoing packets. On the other system B, which is able to reach 172.17.0.0/24 via the tunnel, I reconfigured ntpd like so:

server 172.17.0.1 iburst dynamic
server 172.17.0.2 iburst dynamic
server 172.17.0.3 iburst dynamic

After restarting ntpd, let's see how it works:

ntpq> peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*172.17.0.1      131.188.3.221    2 u   12   64    1   18.999    2.296   1.395
 172.17.0.2      192.53.103.104   2 u   11   64    1    0.710    1.979   0.136
 172.17.0.3      130.149.17.8     2 u   10   64    1   12.073    5.836   0.089

Seems to work :). Here's what we see on the tunnel interface:

14:13:32.534832 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
14:13:32.556627 IP 172.17.0.1.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:33.535081 IP 10.10.10.1.123 > 172.17.0.2.123: NTPv4, Client, length 48
14:13:33.535530 IP 172.17.0.2.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:34.535166 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
14:13:34.535278 IP 10.10.10.1.123 > 172.17.0.3.123: NTPv4, Client, length 48
14:13:34.544585 IP 172.17.0.3.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:34.556956 IP 172.17.0.1.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:35.535308 IP 10.10.10.1.123 > 172.17.0.2.123: NTPv4, Client, length 48
14:13:35.535742 IP 172.17.0.2.123 > 10.10.10.1.123: NTPv4, Server, length 48
14:13:36.535363 IP 10.10.10.1.123 > 172.17.0.1.123: NTPv4, Client, length 48
...

And the forwarded traffic on the public interface:

14:13:32.534944 IP 78.47.130.33.63956 > 178.23.124.2.123: NTPv4, Client, length 48
14:13:32.556586 IP 178.23.124.2.123 > 78.47.130.33.63956: NTPv4, Server, length 48
14:13:33.535188 IP 78.47.130.33.48131 > 5.9.29.107.123: NTPv4, Client, length 48
14:13:33.535500 IP 5.9.29.107.123 > 78.47.130.33.48131: NTPv4, Server, length 48
14:13:34.535255 IP 78.47.130.33.56807 > 178.23.124.2.123: NTPv4, Client, length 48
14:13:34.535337 IP 78.47.130.33.56554 > 217.79.179.106.123: NTPv4, Client, length 48
14:13:34.544543 IP 217.79.179.106.123 > 78.47.130.33.56554: NTPv4, Server, length 48
14:13:34.556932 IP 178.23.124.2.123 > 78.47.130.33.56807: NTPv4, Server, length 48
14:13:35.535379 IP 78.47.130.33.22968 > 5.9.29.107.123: NTPv4, Client, length 48
14:13:35.535717 IP 5.9.29.107.123 > 78.47.130.33.22968: NTPv4, Server, length 48
14:13:36.535442 IP 78.47.130.33.24583 > 178.23.124.2.123: NTPv4, Client, length 48
...

As you can see, the NTP daemon gets the time through the tunnel via udpxd, which in turn forwards the queries to the real NTP servers.

Note: if you leave out the -b parameter, the first IP address of the outgoing interface will be used.
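
For example, the DNS forwarder from the first example could then be started like this, and outgoing packets would simply carry the first address of the outgoing interface:

udpxd -l 172.16.0.3:53 -t 213.133.98.98:53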

udpxd source code and documentation are on GitHub.

Update 2015-04-26:

I made a couple of enhancements to udpxd:

  • added IPv6 support (see below)
  • udpxd now forks and logs to syslog if -d (-v) is specified

The most important change is IP version 6 support. udpxd can now listen on an IPv6 address and forward to an IPv4 address (or vice versa), or listen and forward on IPv6. Examples:

Listen on IPv6 loopback and forward to IPv6 Google:

udpxd -l [::1]:53 -t [2001:4860:4860::8888]:53

Listen on IPv6 loopback and forward to IPv4 Google:

udpxd -l [::1]:53 -t 8.8.8.8:53

Listen on IPv4 loopback and forward to IPv6 Google:

udpxd -l 127.0.0.1:53 -t [2001:4860:4860::8888]:53

Of course it is possible to set the bind IP address (-b) using IPv6 as well.
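
A hypothetical example (2001:db8::1 is a documentation-prefix placeholder standing in for a real local address):

udpxd -l [::1]:53 -b 2001:db8::1 -t [2001:4860:4860::8888]:53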

I also added a Travis-CI test, which runs successfully. So udpxd is supported on FreeBSD and Linux; I haven't tested other platforms yet.

↷ 22.04.2015 🠶 #source

Travis-CI Review

A couple of days ago I stumbled across Travis-CI, a tool for automated build tests of GitHub projects. It sounded like a great idea to me, since I had been very lax lately with testing builds of PCP on other platforms, so I gave it a try.

Basically Travis-CI works very simply: you create a .travis.yml file, which contains instructions to configure, build, and test your GitHub project, and you create an account on Travis-CI and activate one of your GitHub projects. Then every time you make a commit, Travis-CI creates a fresh VM instance, checks out your latest code, and runs the instructions in your .travis.yml file.
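
For a C project like PCP, such a file might look roughly like this (a sketch, not PCP's actual file; the libssl-dev dependency is just a placeholder):

language: c
compiler:
  - gcc
  - clang
before_install:
  - sudo apt-get update -qq
  - sudo apt-get install -qq libssl-dev
script:
  - ./configure
  - make
  - make test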

However, I had lots of problems making it work. Admittedly, some of them were caused by code which either didn't compile or didn't run on Linux. That's a big point on the plus side, because I was totally unaware of those issues. But I had much more trouble getting all the dependencies to work.

According to Travis' documentation, each VM contains everything needed, like gcc, perl, node.js and whatnot. But in reality you have to dedicate your project to a specific environment, which is C in my case. The unit tests in the PCP source are driven by a perl script which needs a couple of additional modules. The documentation states that they use perlbrew, which I know and like a lot since I use it almost every day. Unfortunately, perlbrew is not installed in a C VM. And so I had to install the perl modules "manually": by wgetting them, untarring, perl Makefile.PL, make and sudo make install.
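
That is, for each missing module, something along these lines went into the build instructions (module name and URL are placeholders, not the actual modules PCP needs):

wget http://www.cpan.org/modules/by-module/Some/Some-Module-1.00.tar.gz
tar xzf Some-Module-1.00.tar.gz
cd Some-Module-1.00
perl Makefile.PL
make
sudo make install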

The good thing is, you can do almost anything on the Travis-CI VM, including root installs with sudo. Thanks to this ability I was able to get the unit tests to work. But PCP also has a python binding, and this was much more troublesome. Python itself was installed, but no headers, no python-pip, no cffi and so on. First I tried to install a package "python-cffi", which does exist according to Google, but not on the Travis-CI VM. Instead I had to install the python headers, python-pip and libffi by downloading and compiling them. There's also a libffi Ubuntu package available, but it could not be installed because of some unspecified and unresolvable conflict. Such conflicts of binary package management tools are the very reason I left Linux behind many years ago.

Well, now, 60 commits later, I've got it working.

Would I recommend Travis-CI? Definitely yes. However, there are some pros and cons I'd like to point out:

Pros:

  1. Each commit leads to a test build, and if that fails you get notified.
  2. Pull requests lead to test builds as well, so you'll see whether a patch breaks something or not.
  3. It's a free service, and a resource-hungry one at that, so: wow!
  4. It just works, and you can fix every problem yourself, given you know how.
  5. The Travis-CI documentation is outstanding, with lots of examples.

Cons:

  1. There's only one platform available for tests: Linux (and only one distribution: Ubuntu). It may be possible to test on Mac OS X, but as of this writing only upon request.
  2. While you can choose between clang and gcc, you cannot test with different versions of the compilers.
  3. The language-specific environments make Travis-CI difficult to use for a project with multiple language needs.
  4. There's no way to live-test your .travis.yml file, as on a VM where you could log in temporarily, initiate the test manually multiple times, and tune the .travis.yml file until it works. Instead you have to commit everything you want to try out, wait until the Travis-CI tests are done, look on the site for the results, and repeat the process. This iterative process consumes a lot of time.
  5. Travis-CI is a German startup in Berlin, and as always with German companies, they just do not respond to emails. I mailed them because of the mixed-language issue outlined above, but nothing came back and I had to figure it out myself, like a blind man in a foreign country (at least that's how I sometimes felt during the process detailed in 3.).
  6. Although they provide a Pro account which you have to pay for, I have doubts about the business model. Maybe the company will vanish overnight (or, worse, get sold and closed).

↷ 20.04.2015 🠶 #source

Crypt::PWSafe3 @Fedora

I bumped the version of Crypt::PWSafe3 to 1.16 and re-licensed the code under the Artistic License 2.0. This was required for license compatibility with Fedora (and is fully OK with me). Someone has now made an RPM package of it. So it seems people are using it, which is quite cool.

↷ 13.02.2015 🠶 #source

New perl module Data::Interactive::Inspect and dbmdeep

I published a new perl module: Data::Interactive::Inspect. It provides an interactive shell which can be used to inspect and modify a perl data structure. The module is the backend for dbmdeep, a DBM::Deep command line manager.

I made a short video to demonstrate the capabilities of the module.

So, basically, you can use the module like a command line database client of some sort, browse the structure and change all aspects of it. There are a couple of scenarios where this capability might help.

Say you develop systems management software for a customer. Your possibilities for debugging are limited, something is awkward, and you need to know what the software writes to the DBM::Deep database. With dbmdeep, which uses this module, the customer would be able to look at it himself, and he would even be able to remove invalid entries.

Another example: let's say you want to feed a perl program which uses a DBM::Deep database with data from the outside, and the program you have to use for that is written in python. The python script could then just generate a YAML file and use the dbmdeep command line tool to import it into the database.
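
The YAML handed over could be any structure DBM::Deep can store; a made-up example of what the python side might emit:

users:
  alice:
    uid: 1001
    groups:
      - staff
      - dev

dbmdeep would then merge this structure into the database (see its documentation for the exact import invocation).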

↷ 08.02.2015 🠶 #source