Saturday, April 28, 2012

Hardening a linux server

As we announced when we opened this blog, we don't intend to bore you with stories about Yosti and the services we offer on every post.

We promised some juicy articles on the technology we use and what we do. So, today, we are going to bore you... ehm... talk to you about how to protect Linux servers and make them harder to exploit.
After all, we set up a few servers every week, many of our customers use Linux, and there are a few simple things anyone can do to increase the security of their own systems.

The post is divided in two parts. In the first part, we will play some sort of role playing game. You will be the cruel and merciless attacker, trying to gain access to some of our machines for your dark purposes. We, on the other hand, will try to play the role of the humble and responsible system administrators, always struggling to keep up with the latest technologies and new generations of computer scientists.

After the play, you will find some practical advice on how to put what you learned into practice. If you already have some experience with Linux and security, you should skip directly to that part for the short version of the story.

Note that we will be giving out commands that work on Debian and will likely work on other Debian based distributions, like Ubuntu. If you use something else, it shouldn't be too hard to adapt them.

Most of the suggestions here are based on simple Linux wisdom, do not require patches or special tools, but are no silver bullets either. All we are doing here is making the life of an attacker a little harder, hoping that he will move on to easier, more interesting targets.

The Pirates are coming!

So, let's start with our little role playing game. What would you do to take over a server? You can tell us more in the comments, but if we had to guess, you would probably start by running a network scanner against the target. Something like nmap will tell you which ports are open and which software is running, together with the version number. Spend 30 minutes on Google (Bing, Yahoo, ...) or your favorite security site, and if you are lucky, you will find something you can use to your own advantage. What can we do to slow you down?

Rule #1: hide the name and the version of the software you are running. When you connect to a default installation of Apache via telnet and issue a GET request, for example, you can see the name, version, and some of the modules loaded in the Server header. Many of the exploits out there depend on bugs that were present in one version of the server but not others and may require different tweaks depending on the exact binary being run.
Make it harder to find the exact version, and the attacker will need to either find some other way to figure it out, or play with different tweaks and different exploits until he finds the right ones. Hiding the version does not solve the problem - it's generally better to just run a binary without known security bugs. But there's always going to be new bugs, a little security through obscurity may buy you some extra time, and if you are lucky, the attacker may move to some other server.
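As a quick illustration of what an attacker sees, the Server header can be pulled out of a raw HTTP response with a small sketch like this (the helper and the canned response are ours, purely for illustration; in practice you would pipe in the output of a telnet or netcat session against your own host):

```shell
# Pick the Server header out of an HTTP response. In real use, feed it
# something like: printf 'HEAD / HTTP/1.0\r\n\r\n' | nc yourhost 80
server_header() { grep -i '^Server:'; }

# Canned response for illustration:
server_header <<'EOF'
HTTP/1.1 200 OK
Server: Apache/2.2.14 (Debian) PHP/5.3.2
Content-Type: text/html
EOF
```

If your hardened server answers with just "Server: Apache", the attacker has far less to go on.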

So, let's get back to our game. By changing the version numbers and server strings, we have confused things a little bit, and you probably cannot tell for sure what we are running by now. So, what would you do next? There are several ways you could move forward: first, you could try a tool like metasploit, which has a fairly large database of known vulnerabilities and can run the exploits for you. If this doesn't work, you could look online or ask some shady friends of yours if they know of a 0-day: an issue that has just been disclosed, for which we are unlikely to have a fix installed yet, or better yet, a bug that has not been disclosed at all. But those things are rare to come by. Is the target worth it? Are you willing to wait? It is entirely up to you.

But, if the system is running a web server, maybe you could try looking for bugs in the web applications. Is phpmyadmin installed? wordpress? a CMS? phpnuke? horde? It's all great software, but over the years, they have all had security bugs. Maybe there is something you can use to your advantage. Is the server running some custom web application? Maybe you can poke a bit with ', % and other weird characters in forms, see if you can inject SQL code, or how things are escaped. Can you upload files? And download them? What does the URL look like when you download them? Does it point directly to the original file in the web directory?

On the defense side, it is now hard to predict where you will attack or where you will find a problem. A modern server runs many services, has a few web applications installed, offers https with openssl, or maybe it creates thumbnails of uploaded pictures using libpng. Any one of those systems or the libraries they use could have bugs we know nothing about, or for which there is no fix yet, so...

Rule #2: reduce the "attack surface" as much as possible. What this suggestion boils down to is keep the software updated, and don't run anything you don't need. If you can, don't even install anything you don't need.

At this point, and despite all of our efforts, if you are motivated enough there is still a chance you will find a way to get into the server. Why?

Well, first of all, you probably spend more time than us thinking about how to get into servers. You can focus on a target, while we have many machines to defend. Even though we do as much as we can to protect our systems, time is on your side.

Also, if security bugs and issues were easy to find and fix they would not be there in the first place. Even in very mature projects there are serious security problems that pop up from time to time. Security has to be built from the ground up, starting from good coding practices, going through plenty of testing, well-trained eyes for security and a healthy community of users.

As a system administrator there is really not that much we can do with the application - carefully checking all the code is not a viable option, and even if we did so, it would be hard for us to spot anything that people who work full time on the project have not found yet.
So, what we are left with is basically looking at what is behind the project, and using some common sense during the installation.

What does the project look like? Is the software updated frequently? Does it have many users? If you look for that software on search engines, can you find security issues? Does the documentation mention security at all? Are there suggestions on how to harden the setup? Or only instructions for first time users? Are there parameters you can tweak to lock the application down? What does the code look like? Can you understand it? Can the average contributor of the project understand it? Is it using libraries to validate input? Or is there a different chunk of code for each input field, with the code looking like a spaghetti mess? Are there unit tests? Who is using it? Is there a healthy community of skilled users and contributors? Which libraries does it depend on?

Regardless of what we decide, once we install a piece of software, what we are left with is...

Rule #3: defense in depth, assume every application has bugs that can be exploited by an attacker. What can you do to make it harder for an attacker to take over your whole server?

But once again, let's get back to our role playing game. Let's say you found a way into our server. If we did our job well, that way in will probably not be very comfortable to use. Forget about a good shell, auto completion, arrow up, arrow down and history. Most likely you will be able to run commands, but there is a good chance you will not be able to see the output. You are unlikely to have root at this point; most likely you have the privileges of the system you exploited, something like www-data, running code as if you were our web server or a php script.
At this point, there are a few things you might want to do. The first is to find some way to access the system more comfortably: a real shell reachable via the network. The second is to get root, and to find a way to hide your traces, so it will be hard for us to detect the attack.

The question now is how do you go from www-data to root? This is called privilege escalation. What you need is some bug in the kernel, some buggy tool that is installed as suid, or some other way in.

For example, is there a script run from cron you can edit? A tool or a script that is usually invoked by root? Any README that can give you insights about how the system works? Is there something that reads from a database and creates files on disk? Maybe it is run as root? Can you symlink one of the files it writes to to /etc/passwd, /etc/shadow or ~root/.ssh/authorized_keys and stuff data in the database so you can overwrite files with your key? We know it is not easy, but there are plenty of ways you can use to escalate, and this is probably the most fun and interesting part for you.
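To make the symlink trick concrete, here is a toy reenactment in a scratch directory (purely illustrative; the file names are made up and no real cron job or root process is involved):

```shell
# A privileged job blindly writes to a path an attacker controls:
d=$(mktemp -d)
echo 'original contents' > "$d/protected-file"

# The attacker replaces the job's output file with a symlink...
ln -sf "$d/protected-file" "$d/report.txt"

# ...so when the job writes report.txt, it overwrites the target instead.
echo 'attacker-controlled data' > "$d/report.txt"
cat "$d/protected-file"    # -> attacker-controlled data

rm -r "$d"
```

Substitute ~root/.ssh/authorized_keys for the target and the consequences become obvious.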

In most cases here, however, you will need to download and install some additional tools. A rootkit maybe, an exploit for the kernel, ... so, here's our last rule for this article...

Rule #4: plan for the worst. If an attacker gets onto one of your servers, how can you make his life harder? How can you prevent him from gaining root privileges? Once again, you need to reduce the surface for attacks. But this time, you are looking at how to avoid privilege escalation, what you can do locally to make the system harder to take over.

It is now time to get our hands dirty, and see what we can do in practice.

Time to get your hands dirty...

Hide the version number of your servers

Most daemons provide options to hide the version number.
On Apache, you can use ServerSignature Off and ServerTokens Prod. Instead of saying "Apache 2.2.14 mod_php blah", it will just say "Apache".
If you really want to confuse things, you can use mod_headers and something like Header set Server "Microsoft-IIS/9.0".

On postfix you can use the mail_name parameter, on proftpd you can use the ServerName parameter, and on bind you can use options { version "blah"; };.

A tool like nmap can still guess what you are running by using "fingerprinting" techniques, and an attacker can just try all the exploits he has handy. But making an exploit work often requires some craftsmanship. If an exploit is specific to a version of Apache and it is not working out of the box, an attacker might be more prone to giving up rather than trying harder. Again, it's all about raising the barrier, making it harder.

Keep the software updated

We don't have a single good recipe here: every company and system administrator has to come up with their own strategy to keep software up to date. Some people like to have cron jobs that automatically run commands like:
apt-get update
apt-get upgrade
at night. Packages like cron-apt (apt-get install cron-apt) will do it for you.

This works well for home servers or systems that are not mission critical. For production servers, I generally like to be well awake and looking at monitoring consoles while updates happen. We have seen way more servers broken by a bad update than by an evil hacker.

Still, we have a few suggestions that you might find useful:
  1. plan for updates - no matter how hard it was to compile the binary, how long it took you to tweak the parameters to get exactly the configuration you wanted, and how proud you are of the result, you will have to update sooner or later. Document the steps you completed and the problems you had, write a script to do it for you, document why you had to apply patches, and what is different in your setup and your binary from the default ones. Also, can you test and update the software without impacting your users? Do you have a test setup or a test machine? A load balancer or a backup server, so you can take one down without impacting users while you test the new release? If the answer is no, you will have problems staying up to date.
  2. keep it simple - the fewer changes and the fewer patches you have compared to the default, the easier it will be to update the software. Don't recompile binaries unless you need to; stick with what your distribution has tested and provided. If you apply a non-standard patch, make sure to file a bug report against the upstream version; maybe you can get it into the next release of the software without extra work.
  3. uniformity is your friend - if you have multiple servers, each running a different Linux distribution and with a significantly different setup, each one of the servers will have different problems, will require updates at different schedules, and will need different tweaks to get everything to work correctly. The fewer differences between systems, the easier it is to maintain all of them.
A strategy that has worked for us to manage updates is to create our own apt repository for software updates, and have scripts automatically update our servers from that repository alone. When we need to push an update, we generally qualify the software beforehand on some test machine and then push it to our repository, from which it will propagate to all our servers automatically.

Describing how to create an apt repository for Debian would require an article on its own. Finding detailed steps online is not hard, and there are plenty of good readings on the subject.

Stop and remove unnecessary services

This part is easy, we usually start from a simple command:
# netstat -ntulp
Which will give you an output similar to:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0*               LISTEN      1693/sshd
tcp        0      0*               LISTEN      1824/cupsd
tcp6       0      0 :::22                   :::*                    LISTEN      1693/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1824/cupsd
tcp6       0      0 :::80                   :::*                    LISTEN      5512/apache2
udp        0      0*                           1678/avahi-daemon:
Look at the first row starting with tcp. It says that there is a program called sshd with pid 1693 (look at the "PID/Program name" column), and waiting for tcp connections in ipv4 over port 22. Given that the "Local Address" is (which means, "any IP address"), sshd will accept connections getting to any of the IP addresses configured on the machine.
In contrast, the second row shows cupsd, used for printing. It is listening only on port 631 of, so... it can really only receive connections from loopback, right? From the same machine? Are you sure an attacker can't send packets with as the destination IP to eth0? Maybe from another compromised server in the same rack? Also... do we really need to print from our servers?

Let's just shut cupsd down, and get rid of it. On a Debian / Ubuntu system, you will need to run something like:
service cups stop
to stop cupsd, and then either one of:
apt-get --purge remove cups 
to remove cupsd forever, or:
update-rc.d cups disable
to make it not start again. If we don't need it, I personally prefer to just remove the software. A binary that is not installed is significantly less likely to be used to exploit your servers :-). Other commands that may come in handy are:
dpkg -S bin/cupsd
to find which package cupsd belongs to, and:
dpkg -L cups |grep init.d
to find the name of the init script to start and stop it, which is basically the name you used both for service <blah> stop and for update-rc.d.

Repeat many times, until you are sure nothing you don't need is left. You should also try to reboot your server, to make sure that all you need comes back, and no more.
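To avoid eyeballing the netstat output by hand on every pass, a small helper like this (our own sketch, not a standard tool) can list the distinct listening programs; in real use you would pipe `netstat -ntulp` straight into it:

```shell
# List the unique program names that hold listening sockets.
# Feed it the data lines of `netstat -ntulp`; here we use a saved sample.
extract_programs() {
    awk '$NF ~ /\// { split($NF, a, "/"); print a[2] }' | sort -u
}

extract_programs <<'EOF'
tcp        0      0*               LISTEN      1693/sshd
tcp        0      0*               LISTEN      1824/cupsd
tcp6       0      0 :::80                   :::*                    LISTEN      5512/apache2
EOF
```

Each name in the output is a candidate for the stop/remove/disable treatment described above.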

Block traffic you don't expect

After removing unnecessary services, filtering inbound traffic with simple firewalling rules should not be necessary: what's left open is only what you want your users to access or what you need to perform maintenance.
However, you never know which new services will pop up the next time you upgrade the server, and you probably want to protect yourself in case you forgot about something, or in case a colleague runs an experiment with a new service and forgets to tear it down.
What we generally do is put iptables rules to block unexpected traffic. Also, we generally have ssh running on our servers, but we always connect from the same location (office). So, we like to make sure the servers only accept ssh connections from our office ranges. One can always try to spoof addresses or try to take over one of our machines in the office first, but again, we have raised the barrier and made it slightly harder for an attacker.

So, let's say we want to put in place iptables rules to protect our web server:
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT 
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s <yoursourcenetwork/cidr> -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset
Also, don't forget about IPv6! Repeat the same commands, using ip6tables instead of iptables.
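Since every rule has to exist in both tables, a tiny wrapper (our own convention, not a standard tool) saves some typing and keeps the two in sync:

```shell
# Apply the same rule to the IPv4 and IPv6 tables in one go.
both() {
    iptables "$@"
    ip6tables "$@"
}

# Example (run as root on a real system):
# both -A INPUT -p tcp --dport 80 -j ACCEPT
```

One caveat: rules that mention IPv4-only details (like a -s office range) still need separate handling per protocol.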

Again, on a Debian / Ubuntu system you may want to make sure those rules are automatically saved and loaded at startup. We generally use:
apt-get install iptables-persistent
/etc/init.d/iptables-persistent save
Note that with iptables you can do much more: rate limiting, rejecting packets deemed invalid, ... you should really look at the man page and see if there is anything else that could be useful to you!

Don't run services as root

Really, you should run each and every service with the minimum amount of privileges that are necessary. Have a different user for each service, think about tweaking ulimits, capabilities, chroots and maybe even use containers. This can get really complicated really quickly, but at least having different users and setting up privileges on the filesystem correctly should be trivial.

How to do this changes from software to software. In apache, for example, you control it with the User and Group directives. What you should really do is run something like:
# netstat -neutlp 
And check that none of the services is running with a User of 0 (root). For example, on my laptop:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode       PID/Program name
tcp        0      0*               LISTEN      0          9990768     4888/tor
tcp6       0      0 :::80                   :::*                    LISTEN      33          12761135    5512/apache2    
udp6       0      0 :::5353                 :::*                                115        5734        1678/avahi-daemon: 
Note that tor is running as root. In this case, though, it's a false positive:
# ps -o euid,ruid,suid,comm -C tor
  108   108   108 tor
Note that tor is running as effective user 108, which corresponds to:
# getent passwd 108
Probably, tor is one of those daemons that starts as root, prepares to receive network connections, and then drops the privileges. The end result is that netstat shows it as root, while in reality it's running as user 'debian-tor'.

Protect your web applications

This is tricky, as each web application is different from every other, but there are a few things you should be careful about:
  • There should be no directory under the webserver root where the application can write. Ideally, nobody but the system administrator should be able to write in those directories. Let's say for example that you just installed a php web application in /opt/www/myapplication. You can configure Apache so it reads the files in /opt/www/myapplication by using an Alias or Location directive. Now, if you point to /opt/www/myapplication/, none of the subdirectories of /opt/www/myapplication/ should be writable by apache. If myapplication requires temporary storage for uploads, attachments, pictures, ... it should really store those files and directories outside the web root. For example, what happens if somebody manages to upload some .php code as an attachment? Will it be run by the server if that somebody finds the URL? What if an attacker discovers a way to overwrite your files? If the directory can only be read by apache, it will be extremely hard to change those files or to upload code that will be run.
    Some web applications protect those kinds of directories by using .htaccess directives or by randomizing file names, so they can't be easily found by an attacker. But isn't it simpler (and safer) to just put those files where you don't have to worry about protecting them? Run something like:
    su -s /bin/sh www-data -c "find /opt/www/myapplication -writable"
    Ideally, you should get no output. Something like:
    chmod -R ugo=rX /opt/www/myapplication
    can be used to just change all privileges.
  • Keep files that should not be accessed by visitors outside the server root. Example: instead of keeping your php libraries in
    /opt/www/myapplication/lib, keep them in /opt/www/lib/. If the software is well written, it shouldn't make a big difference; accessing those files should just give no output. But why risk it, if you can just keep them somewhere else?
  • Don't be fooled by web login screens. Again, let's say you just installed a web application that is accessible by going to You get a login screen, asking for your username and password. Does that mean that the application can only be accessed by registered users? No! What happens if you visit directly, for example? Each directory contains multiple files, each file can be accessed independently, and it is up to the web application to always check your username and password. There have been several cases of applications that were shipped with a setup script or some debugging tools that made them vulnerable to external attacks. Consider using http authentication instead, possibly over https, on the directory. This will ensure that a username and password are required for each file. For example, to limit the use of to some users, you can create a .htaccess file containing something like:
    AuthType Digest
    AuthName "Server Area"
    AuthDigestDomain /myapplication
    AuthDigestQop auth
    AuthUserFile /opt/configs/apache-authorized
    Require valid-user
    And use a command like:
    touch /opt/configs/apache-authorized
    htdigest /opt/configs/apache-authorized "Server Area" username
    To add a user 'username' authorized to access the directory (note that the realm passed to htdigest has to match the AuthName above). The tool will ask you for a password.
  • Limit access to web applications your users don't need. Most servers today have a set of tools available via web, ranging from things like phpmyadmin to monitoring software or email clients. If a tool is only needed by you and the administrators of the servers, why not make it accessible only from your office? If a bug is found there, it will be harder for an attacker to exploit it. If you work from home, vpn into the office first, and you should be set. To configure apache to only allow certain IPs to use a particular url, you can use something like:
    Order allow,deny
    Deny from all
    Allow from <ip-of-office>/24
  • validate your inputs, escape SQL queries, don't use eval, ... there's plenty of books out there about how to write secure web applications. Read blogs, security sites and forums!
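The directory-permission advice from the first point can be rehearsed on a throwaway tree before touching a real web root (the paths and file names below are illustrative):

```shell
# Build a fake web root, lock it down, and verify the resulting modes.
webroot=$(mktemp -d)
mkdir -p "$webroot/uploads"
echo '<?php phpinfo(); ?>' > "$webroot/index.php"

chmod -R ugo=rX "$webroot"    # read + directory traversal, no write bits

stat -c '%a %n' "$webroot/index.php" "$webroot/uploads"
# files end up mode 444, directories mode 555

chmod -R u+w "$webroot" && rm -r "$webroot"    # clean up the demo tree
```

The capital X is the useful detail: it sets the execute bit on directories (so they can be traversed) without making any file executable.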

Planning for the worst

Detecting attacks, and keeping a trail

There is no silver bullet here, depending on how much access the attacker gets, he may be able to mess with logs and hide his presence. A few things that we found useful:
  • set up remote logging. Files like /var/log/messages keep a list of all events that happened on your system. You can tell syslog, the daemon keeping those log files, to send them to a remote server. It's easy and takes a few minutes to set up. Many tutorials explain how to do it; just search for "setting up remote logging syslog" on your favorite search engine.
  • enable system accounting on your server. In Debian you can install the "acct" package, with something like:
    apt-get install acct
    This will give you commands like lastcomm and sa which you can use to see which commands were run by whom, when. For example, on my home server:
    # lastcomm
    lastcomm               root     pts/0      0.03 secs Sat Apr 28 15:15
    cron              F    root     __         0.00 secs Sat Apr 28 15:15
    sh               S     root     __         0.00 secs Sat Apr 28 15:15
    sadc             S     root     __         0.02 secs Sat Apr 28 15:15
    trivial-rewrite  S     postfix  __         0.01 secs Sat Apr 28 15:09
    bounce           S     postfix  __         0.01 secs Sat Apr 28 15:09
    bounce           S     postfix  __         0.00 secs Sat Apr 28 15:09
    smtp             S     postfix  __         0.00 secs Sat Apr 28 15:09
    cleanup          S     postfix  __         0.00 secs Sat Apr 28 15:09
    cron              F    root     __         0.00 secs Sat Apr 28 15:09
  • there's plenty of software and scripts out there to detect changes to your systems and notify you, from the venerable tripwire to things like logwatch or tenshi. Install one, and tune it so it does not spam you. If you get false positives and keep getting alerts, you will stop paying attention.
  • use some monitoring system; look for increases of errors in error.log, abnormal bandwidth usage and similar. Chances are that if an attacker takes over your system, he will use it for something: often DoS attacks against other servers, or hosting his own sites. Also, look at your web server error.log from time to time. Errors like "command not found" or "no such file or directory" are generally bad signs; remember that error.log captures the standard error of any command run by the web server.
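As a starting point for that last habit, a sketch like this can count the telltale lines (the log path and the patterns are illustrative, not exhaustive):

```shell
# Count suspicious lines in a web server error log.
scan_log() {
    grep -Eic 'command not found|no such file or directory' "$1"
}

# Demo against a fabricated log:
log=$(mktemp)
cat > "$log" <<'EOF'
[error] sh: wget: command not found
[error] File does not exist: /var/www/favicon.ico
[error] sh: /tmp/.x/run: No such file or directory
EOF
scan_log "$log"    # -> 2
rm -f "$log"
```

Wire something like this into cron and have it mail you when the count jumps.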

Making privilege escalations harder

Once an attacker gets any sort of ability to run commands on your server, he will probably want to run some sort of shell, and start using your server for his own purposes. Most of the time, this involves installing some software, which means downloading it from somewhere, possibly compiling it, and finally running it.
There are a few things you can do to make any of those steps harder...

Making 'downloading' the rootkit / exploit hard

Something we generally like to do on our servers is block access to the internet, except for what is necessary. Before you scream "can't do that!", follow us for another few lines. If you install apache as a web server, it will generally receive GET, POST or other sorts of requests from your users, which means it will receive inbound tcp connections, and send replies. On most setups, apache does not need to start new outbound connections. Can you think of a reason why your apache should connect to some other web server? It may need to connect to a remote MySQL server, or if you are using mod_proxy, it may need to forward connections to one of the servers in your web farm. But can you think of a reason why it should be able to initiate outbound connections to random hosts on the internet? This is true for most services running on a Linux server.

What we do is run some commands like:
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m owner --gid-owner network -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
Those commands will tell the Linux kernel to discard all outbound packets unless they are for the machine itself (things like a local mysql instance, for example), they belong to a user who is a member of the network group, or they are part of a connection we received and authorized already.

This means that you will still be able to ssh to your server, your users will still be able to access web pages, but only users part of the network group will be able to create new outbound connections. Not even root will be able to send data out, unless he removes the iptables rules first. How to set this up?

Create a group called network:
addgroup --system network
run the iptables commands described above, and then save the rules so they won't be lost on next reboot:
apt-get install iptables-persistent
/etc/init.d/iptables-persistent save
Now... to allow a user to have free access to the network, you can run:
adduser mark network
for example. The trick here is to allow only users you trust, or none at all, and not to allow users associated with running daemons. For example, you should not give the network group to www-data, bind, named, cups, mysql, or any other such account. And if you need those daemons to make outbound connections, use specific rules.

For example, to allow apache to connect to our mysql server, you could use:
iptables -I OUTPUT 1 -p tcp --dport 3306 -d mysqlserver.mydomain -m owner --uid-owner www-data -j ACCEPT
or to allow bind to make recursive dns queries:
iptables -I OUTPUT 1 -p tcp --dport 53 -m owner --uid-owner bind -j ACCEPT
iptables -I OUTPUT 1 -p udp --dport 53 -m owner --uid-owner bind -j ACCEPT
and you will be set. If you want to upgrade your system as root, or run commands that need networking, all you have to do is use the command 'sg'.

For example, this command:
sg network -c "apt-get update"
will run apt-get update from the network group, and will be allowed to access the outside network.

Making 'compiling' exploits (or other tools) hard

Well, this part shouldn't be difficult, right? The simplest thing you can do is not to have a compiler or an interpreter that you don't really need. For example, do you really need gcc on your production servers? Remove it!
apt-get --purge remove gcc
Every one of our production servers has a minimalist installation, the same for every server. Nothing that's not necessary for day-to-day operations is there.
However, we keep a chroot (or a virtual machine) on our development systems with the exact same basic Linux setup. The difference is that there we also keep all the development tools.
When we need to compile something, we compile it on the development machine, in the chroot that mirrors production. We then create a .deb file, and push it to our own apt repository.

To clean up your servers, you should try using tools like deborphan and debfoster.

Making 'installing' and 'running' the exploit harder

What we usually do here is create many partitions, and limit the privileges of each partition as much as we can. As a bare minimum, we create 3 partitions:
  1. one for root
  2. a second one for /var
  3. a third one for our data, in /home or /opt
For root, we add a script in /etc/rc2.d/ called S99remountro, which just runs the command 'mount -o remount,ro /'. This will cause the root partition to be made read only after boot. At least under Debian, this is safe and generally works, as there is really not that much that needs to be written outside /var after boot, and any directory like /run/lock or /dev/shm (or /run/shm) that needs to be written to is already configured as a temporary RAM file system, also known as tmpfs.
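The script itself can be as small as this (matching the setup just described; adjust the runlevel directory and link name to your own system):

```shell
#!/bin/sh
# /etc/rc2.d/S99remountro: remount the root filesystem read-only
# once the rest of the boot has completed.
mount -o remount,ro /
```

When you do need to change something under /, a quick 'mount -o remount,rw /' makes it writable again until the next boot.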

Having root read only will protect it against accidental changes, and will ensure that there is no sensitive directory or file left writable by others.

We also want to make sure that none of the directories writable by untrusted users can contain devices, suid programs, or can run arbitrary binaries.

All you need to do is add the options nodev,nosuid,noexec in your fstab for each partition that should not be used for scripts and tools, for example:
tmpfs      /tmp      tmpfs   nodev,nosuid,noexec,size=10%  0   0
so nobody can run programs out of /tmp.
This might seem like overkill, but in my previous life as a consultant, I have seen at least 3 cases where something as simple as this blocked an attack. In one case, the attacker had downloaded a rootkit and various other tools into /tmp, but could not run them. As he couldn't see the output of the commands he was running, due to the nature of the exploit, he remained stuck there, without doing further damage.

I still have the habit of looking into which directories are writable by users like www-data, to make sure that all are marked nodev,nosuid, and noexec.

Other random suggestions

We generally don't like typing passwords, especially on machines that might have been compromised. We use ssh-agent and ssh keys instead, and set up ssh to not forward agent connections by default.

If a file shouldn't be changed and you use an ext* file system, maybe marking it with chattr +i can be of help. We do it for files like /root/.ssh/authorized_keys.

Don't be afraid of using recent tools and kernel features. Capabilities have come in handy at times, so you don't need binaries to have root privileges with suid and you can instead say 'this binary can read or write raw packets', with something like setcap cap_net_raw=ep /bin/ping.

POSIX ACLs also allow finer-grained access control, and are still rarely seen on servers. Just set the 'acl' mount option in fstab, and use tools like setfacl to change them on files or directories.


To close the article, I'd like to remind you of the beauty of web 3.0 and cloud computing: if you fear for your data and your servers, you don't need to run them. You can buy storage and compute power easily so you only have to worry about the security of your own application.
And well, about who you are giving your data to.

If you are really concerned about security, you should also explore using selinux or patches like the grsecurity ones.
