


Re: [tlug] Dealing with software with wide attack surface



On 2021-09-06 00:32 +0200 (Mon), Christian Horn wrote:

> Ah, I had read your part as a recommendation to host parts of my
> site at other systems, i.e. a CDN.  That would be management over-
> head for my site.

There's no good reason not to use a CDN these days. There's good reason not
to manage that yourself, but there are plenty of services that handle all
that for you now, even for free. GitHub Pages and Netlify are two examples.
(The TLUG website, except for the mailing list archives, has been using the
latter for over two years now.)

> My content is statically generated where possible, for example with
> fgallery for images, and hugo for the blog.

Yeah, so that's quite possibly a site where you could just hand a service
enough access to clone and update from wherever you keep your repo, and it
could deal with doing builds and releasing the new version whenever you add
commits to your production branch.
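
For a Hugo site on Netlify, for example, the build configuration can be as
small as a netlify.toml in the repo (a minimal sketch, assuming Hugo's
default "public" output directory):

    [build]
      command = "hugo"
      publish = "public"

After that it's just a matter of pointing Netlify at the repo and telling
it which branch is production.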

On 2021-09-06 12:48 +0900 (Mon), Stephen J. Turnbull wrote:

> Isolation != security, but it helps.

It depends. It helps in certain ways, but if it makes you run more
software, it also hinders you, since more software means not only a larger
attack surface but, more importantly, more stuff to configure and maintain.
There's little doubt in my mind that configuring and maintaining software
is by far the largest source of security problems.

> I don't have the experience, but I can offer the data point that
> SELinux is a current pain point for Fedora (where it is on by default,
> I gather).  The problem is that the maintainers don't know how to
> write SELinux policies for their packages, and the SELinux people in
> Fedora don't have the package knowledge to write them either, though
> they try to help.

I don't know when Fedora turned on SELinux by default, but CentOS had it on
by default back around 2015 when I started using it on my desktop machine.
(I thought it would be good experience since we used it on servers, and it
was.) That pain point apparently has not changed in the slightest. The most
common solution to SELinux problems was simply to enable full access for
the problematic programs, as I recall.
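
By "enable full access" I mean things like flipping the offending domain to
permissive mode rather than writing a real policy (a sketch from memory;
the domain name is only an example):

    # stop enforcing policy for just this program's domain
    sudo semanage permissive -a httpd_t

    # or, the bluntest instrument: stop enforcing for the whole system
    sudo setenforce 0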

On 2021-09-06 13:20 +0900 (Mon), furkan@example.com wrote:

> Containers are kind of meaning 2 things at the same time, and little
> bit confusing. It's the image and execution engine/runtime.

Actually, three things!

The core of containerisation, which you didn't mention, is simply being
able to configure processes to have different views of the system. This is
a pretty old idea: the chroot() system call, which allows you to give a
process a different view of the filesystem, dates from at least as far back
as 4.3BSD-Reno in 1990. Modern containers are nothing more than processes
which have been configured with this and a lot of similar things that
change their view of the world.
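
To make that concrete, here's roughly the smallest possible "container": a
C sketch (the /srv/rootfs path is just an example) that does nothing but
change one process's view of the filesystem. Modern runtimes layer
namespaces, cgroups and so on over exactly this sort of call.

    /* Minimal sketch: give this process (and its children) a different
       view of the filesystem.  Assumes /srv/rootfs (a made-up path) is
       an existing, populated root filesystem and that we run as root. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("/srv/rootfs") != 0) { perror("chroot"); return 1; }
        if (chdir("/") != 0)            { perror("chdir");  return 1; }
        /* From here on, "/" means /srv/rootfs for this process and
           anything it forks or exec's. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");   /* only reached if the exec failed */
        return 1;
    }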

Then of course there are "images" which include, as well as the files,
extensive information about how the "root" process of the "container" is
supposed to be configured. The defaults tend towards a very isolated view
of the system (can't see processes other than its children, has its own set
of UIDs distinct from all others on the system, has its own separate
network interface on an internal network, and so on) but you need not stick
to that default. It's perfectly reasonable, for example, to have the
process use the same network interface(s) as the rest of the system, and
even bind to ports less than 1024 on that interface.
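
For instance (assuming Docker as the engine; the image is just an example),
sharing the host's network stack is a single flag:

    # The container uses the host's network interfaces directly, so the
    # nginx inside it binds to port 80 on the host itself.
    docker run --rm --network host nginx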

And the container runtime is, as you explained, what does all the
configuration of the kernel to set up and spawn the process with the
desired view of the world, including setting up new filesystem mounts,
network interfaces, network bridge configurations, and so on.
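
You can do a stripped-down version of that by hand with unshare(1) from
util-linux, which is a decent way to see that the "runtime" is really just
asking the kernel for things (a sketch; a real engine would go on to set up
mounts, veth pairs, bridges and the like before exec'ing the workload):

    # A shell with its own PID, mount, network, UTS and IPC namespaces.
    # It starts with no network interface other than a downed loopback.
    sudo unshare --fork --pid --mount --net --uts --ipc /bin/sh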

> The tarball can be made quite secure, by not including shell, etc. to
> prevent privilege escalation paths (within the container, or to the
> host) as much as possible.

Most container images are designed in such a way that there is no privilege
escalation path "within the container." Once you're root, which is where
your process usually starts and stays, there's no place to which to
escalate! And generally, if I'm feeling the urge to start creating separate
non-root users and other mechanisms to limit access between different
processes in the container, I start asking myself why I'm not putting these
separate processes in separate containers that share whatever data or
connectivity is necessary.

Thus, there's little reason not to include a shell, and it makes debugging
quite a lot easier.
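
For what it's worth, the usual debugging move (assuming Docker; the
container name is made up) is simply

    docker exec -it mycontainer /bin/sh

which of course works only if the image ships a shell in the first place.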

> Most engines are also handling privilege escalation issues as first
> class problems to deal with, so security is a feature of these tools.

Well, a true privilege escalation bug out of a container is a process
that's managed to break the limits set on it by the kernel, so that would
be a Linux kernel bug, not a container engine bug. (Remember, the kernel
here is actually what does all the work: the container engine just
configures the kernel.)

So a "privilege escalation" bug in a container engine is really just a
problem with how the container engine is configuring the kernel.

> There are VM based runtimes, running your images as-is, but on a VM,
> removing shared kernel from the calculation, potentially reducing the
> attack surface even more.

Yeah. So long as you don't add any other software to make up for the fact
that you're running on a bare kernel. Your process needs to start as
'init', it must mount any filesystems it needs, it must configure and bring
up all the networking itself, and so on. Or you need to start bringing in
other programs (more likely suites of programs, such as systemd) to do this
for you, at which point your attack surface is blowing up like a _very_
stretchy balloon.

One of the nice things about containers is that all of these configuration
programs run _outside_ the container, so problems in them can't be
exploited from inside it.

And of course there are all the system resources (RAM, a whole separate
kernel, and so on) that you're no longer sharing but instead dedicating to
what are essentially individual processes.

I think that separate VMs can work very well for certain types of software,
but that software really wants to ditch the Linux kernel entirely and
simply run as a "kernel-level" program on the bare VM hardware. This is
also incredibly performant because there's no longer an expensive
userland-kernel syscall barrier and the like. Someone did up a set of
libraries for OCaml to do this many years ago (I think I first saw it
around 2009) and performance was pretty impressive.

cjs
-- 
Curt J. Sampson      <cjs@example.com>      +81 90 7737 2974

To iterate is human, to recurse divine.
    - L Peter Deutsch

