Tracing malicious scripts on poorly configured GNU/Linux servers

Warning: This post is worthless if you are not me.

I have recently been poking at a few old servers that still run Apache 2.0 in the old handler configuration. You got it. The good old days of every process running as "nobody" from start to finish.

Accountability is for the birds.

These servers enjoy consistently being part of various botnets. Perl scripts are written to /tmp with random filenames (as the "nobody" user), executed by a call to the perl binary (not just as ./blah), the process forked, and the original file rm'ed. To add insult to injury, the procfs entries (procfs itself has been deprecated in FreeBSD in favor of procstat and sysctl) are altered. This leaves little to trace, as the usual task of running lsof against the pid yields unproductive results in this configuration: you get the pid of a deleted file that has already been processed by perl, so the entire perl executable is what gets associated with it. Since this particular alteration destroys the file descriptor, trying to cp the file via its /proc file descriptor results in zero bytes actually copied.

These processes are frequently spoofed so that they appear in ps or top as:

/usr/local/apache/bin/httpd -k start -DSS
everything else that ever runs on any server ever

How does this happen? Perl makes it simple.

perl -e '$0 = "/usr/local/sbin/h4x_y3r_f4c3"; system "ps -f $$"'

I must note that GNU/Linux systems by default will not tell you it's a perl script, while FreeBSD will. The following is the output of the above command, first on a Linux system, then on a FreeBSD system.

username 1616 30537 0 23:50 pts/0 S+ 0:00 /usr/local/sbin/h4x_y3r_f4c3

On a FreeBSD system..

9487 0 S+ 0:00.01 /usr/local/sbin/h4x_y3r_f4c3 (perl5.10.1)

Yes, I had to include a reason why FreeBSD is a better system.

The easy solution is to block the outbound port in iptables (I have grown fond of apf for its ease of use) and be done with it, but that's not good enough for me. My first thought was to wish that mount or chattr had a "stupid and almost never useful" read/write/no-delete setting. That would have easily solved the problem, as I'm sure the dropper rm's the file without truncating it first. It is a production server, so I'm unable to unmount the partition and grep blocks for strings. Another issue would be getting permission to make such a change on a production server to research something that's basically a waste of time in the scope of daily operations. These servers will not be around much longer. No, really. I mean it this time.
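For the record, the quick-and-dirty containment looks something like this. The port number is a made-up example (6667, a common IRC choice for these bots); match it to whatever the thing actually connects to:

```shell
# Drop outbound TCP from the "nobody" user to the offending port.
# Port 6667 is hypothetical; adjust to the bot's real destination.
iptables -A OUTPUT -m owner --uid-owner nobody -p tcp --dport 6667 -j DROP
```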

I'm not a programmer (I can read and noodle with C/Perl/Bash, and was once proficient in assembly), so I totally missed the "easy" solution. I knew you could view the memory ranges a process consumes from its proc entry, but I did not know that gdb (which is horrid for anything you don't have the source code for) offered this function, nor that gcore was a standalone utility able to produce core dumps from running processes without terminating them.

gcore -o dumpfile pid

Not an ideal answer, as I now have a 50-100 megabyte (usually closer to 64) memory dump to parse through, but it contains the data that satisfies my personal curiosity. I'm able to extract everything I want from the core dump: what the abused server has been doing, what other servers it's communicating with, and the actual "important" part.

What user account the offending script was dropped from.

A standard core dump contains the shell environment variables, which includes the working (or previously working) directory. A simple command to search for a string (look for /home/ or PWD=) can produce the original working directory. Now that you know the account and path, it should be a trivial task to locate the origins or enabler of the offending file.
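The search itself is just strings piped through grep. As a stand-in for a real core (the "victim" account name is made up), the same pipeline against any binary blob containing an environment block pulls the path right out:

```shell
# Fake a tiny "core" with an environment string buried in junk,
# then run the same strings | grep pipeline you'd use on the dump.
printf 'junk\0\1\2PWD=/home/victim\0more junk\0' > /tmp/fakecore
strings /tmp/fakecore | grep -E 'PWD=|/home/'
```

On the real server, the only difference is the input: gcore the offending pid as shown above, then point strings at the resulting dump file.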