102

Coming from the comments on this question, Why is it bad to log in as root?:

The sudo mechanism is in place so that non-administrative tools "cannot harm your system." I agree that it would be pretty bad if some GitHub project I cloned were able to inject malicious code into /bin. However, what is the reasoning like on a desktop PC? The same GitHub code can, once executed without sudo rights, wipe out my entire home folder, put a keylogger in my autostart session, or do whatever it pleases in ~.

Unless you have backups, the home folder is usually unique and contains precious, if not sensitive, data. The root directories, however, make up the system and can often be recovered by simply reinstalling it. There is configuration saved in /var and so on, but it tends to have less significance to the user than the holiday pictures from 2011. The root permissions system makes sense, but on desktop systems, it feels like it protects the wrong data.

Is there no way to prevent malicious code from doing damage in $HOME? And why does nobody care about it?

phil294
  • 8
    The real issue is that people rarely use mandatory access controls like AppArmor to protect their home directory. When they do, then protecting root protects AppArmor, which in turn protects your home. On Ubuntu for example, your browser is not necessarily allowed to access your holiday pictures, despite running as your user in your home. – forest Feb 27 '18 at 03:43
  • 7
    The OS's job is to protect itself from you, the untrusted user and, by proxy, the programs you (perhaps foolishly) run. If you run a program that deletes all your stuff, well, then it sucks to be you. But the OS needs to protect itself, and so you running a rogue program – intentionally or unintentionally – should not be able to disable the system. It makes no difference whether it is a desktop system or a server. – Christopher Schultz Feb 27 '18 at 14:52
  • 3
    User [error/stupidity/???] can completely prevent that user from using the system, but shouldn't impact other users, nor the system as a whole. – Basic Feb 27 '18 at 15:26

13 Answers

100

I'm going to disagree with the answers that say the age of the Unix security model or the environment in which it was developed is at fault. I don't think that's the case, because there are mechanisms in place to handle this.

The root permissions system makes sense, but on desktop systems, it feels like it protects the wrong data.

The superuser's permissions exist to protect the system from its users. The permissions on user accounts are there to protect the account from other non-root accounts.

By executing a program, you give it permissions to do things with your UID. Since your UID has full access to your home directory, you've transitively given the program the same access. Just as the superuser has the access to make changes to the system files that need protection from malicious behavior (passwords, configuration, binaries), you may have data in your home directory that needs the same kind of protection.

The principle of least privilege says that you shouldn't give any more access than is absolutely necessary. The decision process for running any program should be the same with respect to your files as it is to system files. If you wouldn't give a piece of code you don't trust unrestricted use of the superuser account in the interest of protecting the system, it shouldn't be given unrestricted use of your account in the interest of protecting your data.

Is there no way to prevent malicious code from doing damage in $HOME? And why does nobody care about it?

Unix doesn't offer permissions that granular for the same reason there isn't a blade guard around the rm command: the permissions aren't there to protect users from themselves.

The way to prevent malicious code from damaging files in your home directory is to not run it using your account. Create a separate user that doesn't have any special permissions and run code under that UID until you've determined whether or not you can trust it.
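As a rough sketch of that approach (the account name untrusted, the directory names, and run.sh are placeholders; the exact commands vary slightly between distributions):

    # create a throwaway account with its own home and no extra group memberships
    sudo useradd --create-home --shell /bin/bash untrusted

    # hand the cloned code over to that account
    sudo cp -r ./cloned-repo /home/untrusted/
    sudo chown -R untrusted:untrusted /home/untrusted/cloned-repo

    # run it as that user: it can trash /home/untrusted, but not your own home
    sudo -u untrusted -H bash -c 'cd /home/untrusted/cloned-repo && ./run.sh'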

There are other ways to do this, such as chrooted jails, but setting those up takes more work, and escaping them is no longer the challenge it once was.

Blrfl
  • Comments are not for extended discussion; this conversation has been moved to chat. – Rory Alsop Mar 01 '18 at 00:04
  • 4
    It is disturbing how people around here simply upvote the first answer they see. Before this was the accepted one, the previously highest-voted one got 45 upvotes. Since then, only one, while this one suddenly gained 50. I really wish everybody voted according to content instead of order. Sorry, off-topic comment. – phil294 Mar 02 '18 at 12:30
  • @Blauhirn That answer is actually accumulating a significant number of downvotes, which is why the score hasn't changed much. You do have a point though; even accounting for that, there has been a lot more voting activity on the top-sorted answer than the one below it. – Ajedi32 Mar 02 '18 at 18:08
  • 2
    @Blauhirn if it makes you feel better, two days ago when I first stumbled on this question, I read all the answers and only upvoted this one, based on content (it wasn't even the most voted by then). (My point is that perhaps people find this answer better than the others - I do.) – Pedro A Mar 02 '18 at 19:46
  • @Hamsterrific There should be a badge for those of us who take the effort to read through every, or almost every, available answer and who hesitate to vote solely on the topmost one. – can-ned_food Mar 04 '18 at 20:12
  • @Blauhirn, there's also the possibility that the first answer expresses an opinion that most people agree with but hadn't necessarily thought of before. You then compare both and choose the one you prefer. There is also a reason this answer has risen to the top. – everyone Mar 05 '18 at 10:52
  • @everyone yes, because I accepted it. Beforehand, it was not upvoted especially much – phil294 Mar 24 '18 at 12:21
55

Because the UNIX-based security model is 50 years old.

UNIX underlies most widespread OSs, and even the big exception Windows has been influenced by it more than is apparent. It stems from a time when computers were big, expensive, slow machines exclusively used by arcane specialists.

At that time, users simply didn't have extensive personal data collections on any computer, not their university server, not their personal computer (and certainly not their mobile phone). The data that varied from user to user were typically input and output data of scientific computing processes - losing them might constitute a loss, but largely one that could be compensated by re-computing them, certainly nothing like the consequences of today's data leaks.

Nobody would have had their diary, banking information or nude pictures on a computer, so protecting them from malicious access wasn't something that had a high priority - in fact, most undergraduates in the 70s would probably have been thrilled if others showed an interest in their research data. Therefore, preventing data loss was considered the top priority in computer security, and that is adequately ensured by regular back-ups rather than access control.

Kilian Foth
  • 10
    People did have personal data at the time, mostly in the form of email. Not all of this was simply communication between colleagues. It was still protected by user permissions. I think the main difference between then and now is that people generally didn't connect their computers and download code from poorly trusted, or even malicious sources. This happens routinely now. – Steve Sether Feb 26 '18 at 18:21
  • 1
    @SteveSether even if the "people then didn't do X that we do now" explanation fails, the age of the security model is a valid reason. Indeed the attack surface is bigger now, as you accurately point out. – Mindwin Remember Monica Feb 26 '18 at 18:50
  • 2
    While this is partially correct, the real reason is that malicious root access allows you to (usually) compromise the kernel, which allows you to bypass any protection mechanisms you may have for your home, such as AppArmor. Also, the security model is more geared towards servers and mainframes, which actually do a lot of UID-based separation. – forest Feb 27 '18 at 03:28
  • 34
    The age is fundamentally not the problem. The problem is that the system cannot distinguish between a user intentionally executing a script that wipes their home directory and unintentionally executing one. It's the same old problem of, "You can't tell the user and the attacker apart." If it was that easy to come up with an answer, someone would already be pushing it. -1 for a terribly off-point answer. – jpmc26 Feb 27 '18 at 04:15
  • 5
    @jpmc26 Except newer systems (iOS, Android, etc) have already come up with an answer: fine-grained permissions for every app on your system. The reason Linux hasn't adopted that model is because of its age; it has decades of software built around this legacy security model that it has to support. (Including system software.) Windows has the same problem, for the same reason. Newer operating systems that have had a chance to start from scratch do not. – Ajedi32 Feb 28 '18 at 16:27
  • @Ajedi32 Previously addressed in the chat. – jpmc26 Feb 28 '18 at 16:48
31

This is a highly astute observation. Yes, malware running as your user can damage/destroy/modify data in your home directory. Yes, user separation on single user systems is less useful than on servers. However, there are still some things only the root user (or equivalent) can do:

  • Install a rootkit in the kernel.
  • Modify the bootloader to contain an early backdoor for persistence.
  • Erase all blocks of the hard disk, rendering your data irretrievable.

Honestly, I find the privilege separation on workstations most useful to protect the workstation from its biggest enemy: me. It makes it harder to screw up and break my system.

Additionally, you could always set up a cron job as root that makes a backup of your home directory (with, e.g., rsnapshot) and stores it such that it's not writable by your user. That would offer some level of protection in the situation you describe.
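A minimal sketch of that setup, assuming rsnapshot is installed; the user name alice, the paths, and the retention settings are only examples:

    # /etc/rsnapshot.conf (excerpt) - fields must be separated by tabs
    snapshot_root   /var/backups/snapshots/
    retain          daily   7
    backup          /home/alice/    localhost/

    # keep the snapshot area owned by root and closed to everyone else,
    # so code running as alice cannot touch the backups:
    #   chmod 700 /var/backups/snapshots

    # root's crontab (edit with "crontab -e" as root): snapshot nightly at 03:00
    0 3 * * * /usr/bin/rsnapshot daily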

Obligatory xkcd

David
  • 3
    "It makes it harder to screw up and break my system" makes it sound like Windows is a much better OS in that regard - it's hard to accidentally make a recent Windows install unbootable/broken, even with a privileged account. Maybe because the OS software is so strongly separated from utility? – data Feb 27 '18 at 08:32
25

The original design of Unix/Linux security was to protect a user from other users, and system files from users. Remember that 30-40 years ago, most Unix systems were multi-user setups with many people logging into the same machine at the same time. These systems cost tens of thousands of dollars, and it was extremely rare to have your own personal machine, so the machine was shared in a multi-user login environment.

The design was never intended to protect a user or a user's files from malicious code, only to protect users from other users, users from modifying the underlying system, and users from using too many system resources. In our current era, where everyone has their own computer, the design has (mostly) translated into single-user machines that protect one process from hogging too many system resources.

For this reason, a user-executed program has access to any file the user owns. There's no concept of any finer-grained access control over a user's own files. In other words, a process executed as user A has access to read, modify, and delete all the files that belong to user A. This includes (as you note) autostart files.

A more modern approach might entail some form of further control on certain files. Something like "re-authentication required" to access these files, or perhaps some form of further protection of one program's files from another program's. AFAIK there isn't (currently) anything like this in the Linux desktop world. Correct me if I'm wrong?

Steve Sether
  • 10
    "AFAIK there isn't (currently) anything like this in the Linux world." - not counting Android of course. – user11153 Feb 26 '18 at 16:54
  • 3
    Not Linux, but OS X has "sandboxing" that can restrict the files that some applications can access. – Barmar Feb 26 '18 at 18:21
  • 4
    You can use snap or Qubes os which both offer their unique app isolations. – eckes Feb 26 '18 at 22:07
  • 5
    @Barmar Linux has that as well in the form of AppArmor (on Ubuntu) or SELinux (on Fedora). – forest Feb 27 '18 at 03:31
10

Is there no way to prevent malicious code from doing damage in $HOME?

To answer this question, what some installations do is make use of the existing security framework by creating a user specifically to run the program. Programs will have a configuration option to specify which user they should run as. For example, my installation of PostgreSQL has the database files owned by the user postgres, and the database server runs as postgres. For administrative commands of PostgreSQL, I would change users to postgres. OpenVPN also has the option to change to an unprivileged user after it's done using the administrative powers of root (to add network interfaces, etc.). Installations may have a user named nobody specifically for this purpose. This way, exploits of PostgreSQL or OpenVPN would not necessarily lead to the compromise of $HOME.
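For illustration, this is roughly what that looks like in practice (the group name and config paths differ between distributions):

    # administer PostgreSQL as the dedicated postgres account, not as your own user
    sudo -u postgres psql

    # /etc/openvpn/server.conf (excerpt): start as root to create the tun
    # interface, then drop privileges for the rest of the run
    user nobody
    group nogroup       # called "nobody" on some distributions
    persist-tun
    persist-key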

Another option is to use something like SELinux and specify exactly what files and other resources each program has access to. This way, you can even deny a program running as root from touching your files in $HOME. Writing a detailed SELinux policy that specifies each program is tedious, but I believe that some distros like Fedora go halfway and have policies defined that only add additional restrictions to network facing programs.
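Before relying on SELinux for this, it is worth checking which domain a program actually runs in; under Fedora's default targeted policy, many desktop applications still run unconfined. Two quick checks on an SELinux-enabled system:

    # show the SELinux domain of a running program
    ps -eZ | grep firefox

    # show the SELinux labels on files in your home directory
    ls -Z ~/Pictures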

JoL
8

To answer the second part of your question: there are sandbox mechanisms, but they are not enabled by default on most Linux distributions.

A very old and complicated one is SELinux. A more recent and easier-to-use approach is AppArmor. The most useful for personal use (AppArmor and similar systems are mostly used to protect daemons) is firejail, which isolates processes in their own jail.

Firefox, for example, can then only write to its profile directory and the Downloads directory. On the other hand, you will not be able to upload images if you don't put them into the Downloads directory - but this is by design of such a sandbox. A program could otherwise delete your images or upload them to random sites, so the jail prevents this.

Using firejail is easy. You install it, and for programs that already have a profile (look into /etc/firejail) you can just run (as root) ln -s /usr/bin/firejail /usr/local/bin/firefox. If you are not root, or want to pass command-line arguments to firejail (e.g. a custom path to the profile files), you can run firejail firefox.
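A couple of other handy invocations (the option names are from firejail's documentation; the directory and program names are examples):

    # no network access and a throwaway private home directory:
    # everything the program writes is discarded when it exits
    firejail --net=none --private ./some-untrusted-tool

    # additionally expose a single directory (here for uploads/downloads)
    firejail --whitelist=~/Downloads firefox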

Software distribution systems like Snap and Flatpak add sandboxing mechanisms as well, so you can run an untrusted program installed from a random repository without too many consequences. With all these mechanisms, keep in mind that untrusted programs can still do things like send spam, take part in a DDoS attack, or mess with the data you process using the program itself.
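With Flatpak, for example, you can tighten an application's filesystem access yourself (the application ID org.mozilla.firefox is just an example):

    # deny the app access to your home directory...
    flatpak override --user --nofilesystem=home org.mozilla.firefox

    # ...but allow the standard download directory again
    flatpak override --user --filesystem=xdg-download org.mozilla.firefox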

allo
  • sounds like a purpose-oriented containerization (like OpenVZ, Docker..) – phil294 Feb 27 '18 at 15:23
  • It uses some of the techniques that are also used by Docker. It has nothing to do with OpenVZ, but is similar to LXC containers, which are similar to OpenVZ. – allo Feb 28 '18 at 08:44
3

The presumption that the wrong data is being protected is false.

Protecting root activities does protect your vacation pictures from 2011. And mine, and your brothers', and everyone else's who uses the computer.

Even if you implemented an OS with a scheme that protected the home account by requesting a password every time an app tried to access a file, and removed root password protection, I would not use it because that would be worse for those vacation pictures.

If my brother compromises core system functionality on our home computer, then my vacation pics are deleted, ransom-wared, or whatever else despite your home directory protections, because the system itself is now compromised and can get around whatever user-level restrictions you implemented.

And most people would be very annoyed if they had to enter a password every time they chose File -> Open in their word processor.

Also, we have already had the issue of access-control prompts appearing too often on home computers. When Microsoft first rolled out their UAC feature (for which you don't even need to enter a password if using the main account... all you need to do is press a button), it came up a lot, and people complained enough about the 0.5 seconds of their life wasted 20 times per day that Microsoft changed it. Now, this was not the kind of protection you're talking about, but it does show us that if people are unwilling to click a security button a few dozen times per day for Microsoft's system security, they're not going to want to click (or worse, type a password) for whatever gets implemented to protect their pics from that random app they just ran.

So the basic answer is:

  1. Protecting root does protect your personal pics.
  2. People complain about that type of authentication being asked too often.
Aaron
  • Microsoft is still trying to perfect the art of a user only affecting user-created documents (and not other users', installed programs, or the operating system). In Win10, "Installed Programs" now have their own protected directory of shared data as well: "\ProgramData". – MichaelEvanchik Feb 26 '18 at 22:08
  • Yup, but Linux users are not Microsoft users. It says a lot about the community if they accept frequently entering an admin password for system changes. UAC is a good call actually, also see https://superuser.com/questions/242903/windows-uac-vs-linux-sudo (thanks). 1) root protects ~ data from other users and system misbehaviours, I agree. But it does not protect the data from malware run with the user's rights, which is admittedly an inconvenient thing to do. – phil294 Feb 26 '18 at 23:44
  • 1
    @MichaelEvanchik Erm. WinNT has always had multi-user privileges, going back to the 90s. XP brought that into the consumer world, except the default account had admin privs. UAC only added more privilege levels within one user account (i.e. more granular). Specifically, ProgramData has existed in its current form since 2007 (Vista) and in previous incarnations (protected All Users subdirs) since at least 2002 (XP), probably earlier. If one wishes to have a more unix-like security model (including passworded UAC), one only needs to create a new non-Admin user ... which few people want. – Bob Feb 27 '18 at 01:12
  • Anyone who gives me an M$ computer, I make them a non-admin account, create an admin account, and keep its password on a sticky note for my records. Usually works out, except for the hopeless. – MichaelEvanchik Feb 27 '18 at 14:56
  • @Bob NT 4 had separate Start menu folders for "All Users" (which could only be modified by a user belonging to the Administrators or Domain Administrators groups, as I recall) and per-user (which could be modified by each user, but were only accessible to that user). They were also visually separated. Here's a screenshot: http://toastytech.com/guis/nt4start.png from http://toastytech.com/guis/nt4.html. It looks like at least NT 3.51 had the same type of separation, and it's possible that it goes back even further, but NT4 is the first version of Windows NT that I have personal experience with. – user Feb 28 '18 at 07:23
  • Well, asking for a password for each file sounds like the common but irritating Security by Admonition, whereas Security by Designation is preferable; see e.g. [1]. With security by designation, the application can access the document if the user has selected it (in a trusted File open dialog or file manager). Likewise, the application can write to the clipboard if the user has e.g. pressed Ctrl-C. Plash was an old attempt to implement this idea for the Linux command line. [1] http://sid.toolness.org/ch13yee.pdf – gmatht Mar 02 '18 at 09:25