Listen, We Need to Talk About Your Attack Surface

As Miguel de Cervantes wisely said, “Never put all your eggs in one basket.” Yet when we design our IoT software as one large, cumbersome, monolithic stack, that is exactly what we are doing.

And it all comes down to the attack surface.

For a software platform, the attack surface describes all the different points where an attacker could get into a system, and where they could get data out. The attack surface of an application, therefore, includes all of the following:

  • the sum of all paths for data/commands into and out of the application;
  • the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding);
  • all valuable data used in the application, including secrets and keys, intellectual property, critical business data, personal data and PII;
  • the code that protects the data, including encryption and checksums, access auditing, and data integrity and operational security controls.

Traditionally, we’ve thought of attack surface in terms of perimeter. A perimeter is easy to picture: an area that must be patrolled and protected from bad actors. And so it has always been important to protect the perimeter.

On the surface (no pun intended), that makes sense. Consider this analogy. If we had one big, monolithic building measuring 10×5, we would have a perimeter of 30 units to patrol (2 × (10 + 5)). Now, if I carved up my monolithic building into 25 smaller, modularized buildings measuring 2×1 each, the total perimeter would be 150 units (25 × 2 × (2 + 1)). If I told you that having 25 modular buildings would be more secure than one monolithic building, would that even be worth discussing? Nope – your attack surface is 400% larger! Think of all the added work you would have to do just to protect that expanded perimeter.
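The arithmetic behind the analogy is easy to check for yourself; here is a quick Python sketch of the same calculation:

```python
# Perimeter arithmetic from the building analogy above.
def perimeter(width, height):
    return 2 * (width + height)

monolith = perimeter(10, 5)       # one 10x5 building
modular = 25 * perimeter(2, 1)    # twenty-five 2x1 buildings

print(monolith)                         # 30 units
print(modular)                          # 150 units
print((modular - monolith) / monolith)  # 4.0, i.e. a 400% increase
```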

The problem with that logic is that in the world of software, a modular approach is much more secure than a monolithic one.

Dr. Gernot Heiser and two of his undergraduate students performed an analysis to prove out this approach. They developed a modular system built on seL4 and found that it would completely eliminate 40% of the critical vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database, while mitigating a further 55% to one degree or another.

Why does this counterintuitive logic about attack surface work? Simple – instead of one key to the whole house, an attacker now needs 25 keys… and even then, each of the 25 modules has access only to the minimum set of services required to do its job, and no others. A microkernel is what makes this fully modular design practical: a breach of the perimeter of one module does not compromise the rest of the system.
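The principle at work is deny-by-default least privilege. A minimal sketch of the idea, with entirely hypothetical module and service names, might look like this:

```python
# Hedged sketch: each module is granted only the services it needs.
# Names here are illustrative, not taken from any real system.
GRANTS = {
    "app":        {"net_stack", "filesystem", "console"},
    "usb_driver": {"usb_bus"},
    "net_stack":  {"nic_driver"},
}

def can_access(module, service):
    # Deny by default: a service is reachable only if explicitly granted.
    return service in GRANTS.get(module, set())

# A breach of the USB driver yields no path to the app's data:
print(can_access("usb_driver", "filesystem"))  # False
print(can_access("app", "console"))            # True
```

The point of the sketch is the shape of the table: every module's reachable set is small and explicit, so compromising one module buys the attacker only that module's grants.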

So, in spite of the increase in the size of your perimeter (aka attack surface), you can cut the risk to your IoT device through a modular approach in which each component uses only the resources it requires, delivering a more secure system. Or, put another way: go ahead and put your eggs in many baskets.

Microkernels Really Do Improve Security

We are thrilled to re-post this entry from microkerneldude – random rants and pontifications by Gernot Heiser.

Many of us operating systems researchers, going back at least to the US DoD’s famous 1983 Orange Book, have been saying that a secure/safe system needs to keep its trusted computing base (TCB) as small as possible. For a system of significant complexity, this argues for a design where components are protected in their own address spaces by an underlying operating system (OS). This OS is inherently part of the TCB, and as such should itself be kept as small as possible, i.e. based on a microkernel, which comprises only some 10,000 lines of code. This is why I have spent about a quarter century improving microkernels, culminating in seL4, and getting them into real-world use.


Monolithic vs microkernel-based OS structure.

It is intuitive (although not universally accepted) that a microkernel-based system has security and safety advantages over a large, monolithic OS, such as Linux, Windows or macOS, with their million-lines-of-code kernels. Surprisingly, we lacked quantitative evidence backing this intuition, beyond extrapolating defect density statistics to the huge TCBs of monolithic OSes.

Finally, the evidence is here, and it is compelling

Together with two undergraduate students, I performed an analysis of Linux vulnerabilities listed as critical in the Common Vulnerabilities and Exposures (CVE) database. A vulnerability is tagged critical if it is easy to exploit and leads to full system compromise, including full access to sensitive data and full control over the system.

For each of those documented vulnerabilities we analysed how it would be affected if the attack were performed against a feature-compatible, componentised OS, based on the seL4 microkernel, that minimised the TCB. In other words, an application running on this OS should only be dependent on a minimum of services required to do its job, and no others.

We assume that the application requires network services (and thus depends on the network stack and a NIC driver), persistent storage (a file system and a storage device driver) and console I/O. Any OS services not needed by the app should not be able to impact its confidentiality (C), integrity (I) or availability (A). For example, an attack on a USB device should not impact our app. Such a minimal-TCB architecture is exactly what microkernels are designed to support.

The facts

The complete results are in a peer-reviewed paper; I’m summarising them here. Of the 115 critical Linux CVEs, we could analyse 112; the remaining 3 did not have enough information to understand how they could be mitigated. Of those 112:

  • 33 (29%) were eliminated simply by implementing the OS as a componentised system on top of a microkernel! These are attacks against functionality Linux implements in the kernel, while a microkernel-based OS implements them as separate processes. Examples are file systems and device drivers. If a Linux USB device driver is compromised, the attacker gains control over the whole system, because the driver runs with kernel privileges. In a well-designed microkernel-based system, only the driver process is compromised, but since our application does not require USB, it remains completely unaffected.
  • A further 12 exploits (11%) are eliminated if the underlying microkernel is formally verified (proved correct), as we did with seL4. These are exploits against functionality, such as page-table management, that must be in the kernel, even if it’s a microkernel. A microkernel could be affected by such an attack, but in a verified kernel, such as seL4, the flaws which these attacks target are ruled out by the mathematical proofs.
    Taken together, we see that 45 (40%) of the exploits would be completely eliminated by an OS designed based on seL4!
  • Another 19 exploits (17%) are strongly mitigated, i.e. reduced to a relatively harmless level, affecting only the availability of the system, i.e. the ability of the application to make progress. These are attacks where a required component, such as the NIC driver or the network stack, is compromised. This compromises the whole Linux system, while on the microkernel it might lead to the network service crashing (and becoming unavailable) without compromising any data. So, in total, 57% of attacks are either completely eliminated or reduced to low severity!
  • 43 exploits (38%) are weakly mitigated with a microkernel design, still posing a serious security threat but no longer qualifying as “critical”. Most of these were attacks that manage to control the GPU, implying the ability to manipulate the frame buffer. This could be used to trick the human user into entering sensitive information that could be captured by the attacker.

Only 5 compromises (4%) were unaffected by OS structure. These are attacks that can compromise the system even before the OS assumes control, e.g. by compromising the boot loader or re-flashing the firmware; even seL4 cannot defend against attacks that happen before it is running.
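The percentages above can be cross-checked against the raw counts out of the 112 analysed CVEs:

```python
# Tallying the breakdown reported above (112 analysed critical CVEs).
total = 112
buckets = {
    "eliminated_by_structure":    33,  # componentisation alone, ~29%
    "eliminated_by_verification": 12,  # formal verification, ~11%
    "strongly_mitigated":         19,  # availability-only, ~17%
    "weakly_mitigated":           43,  # no longer critical, ~38%
    "unaffected":                  5,  # pre-OS attacks, ~4%
}
assert sum(buckets.values()) == total

fully_eliminated = (buckets["eliminated_by_structure"]
                    + buckets["eliminated_by_verification"])
print(round(100 * fully_eliminated / total))  # 40

low_or_gone = fully_eliminated + buckets["strongly_mitigated"]
print(round(100 * low_or_gone / total))       # 57
```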


Effect of (verified) microkernel-based design on critical Linux exploits.


What can we learn from this?

So you might ask: if verification prevents compromise (as in seL4), why don’t we verify all operating systems, in particular Linux? The answer is that this is not feasible. The original verification of seL4 cost about 12 person-years, with many further person-years invested since. While this is tiny compared to the tens of billions of dollars’ worth of developer time invested in Linux, it is about a factor of 2–3 more than it would cost to develop a similar system using traditional quality assurance (as done for Linux). Furthermore, verification effort grows quadratically with the size of the code base. Verifying Linux is completely out of the question.

The conclusion seems inevitable: The monolithic OS design model, used by Linux, Windows, macOS, is fundamentally and irreparably broken from the security standpoint. Security is only achievable with a (verified) microkernel design. Furthermore, using systems like Linux in security- or safety-critical applications is at best grossly negligent and must be considered professional malpractice. It must stop.

This article was originally posted on the microkerneldude blog.

Insights from ET Exchange

To say ET Exchange was informative, highly educational and galvanizing would be an understatement. Several of us from the Cog team attended the event a couple of weeks ago and were impressed with the caliber of the networking, presenters and content.

Since Cog Systems specializes in cybersecurity, we found it inspiring that security was consistently part of the conversation and referred to as a foundational element of digital transformation throughout the show.

Key takeaways include:

  • We had the pleasure of speaking with executive editor Jack Madden about new security threats and the defenses currently available to enterprise and government. “The S in IoT stands for security,” he joked. But as the world is finding, it is virtually an afterthought, which Jack covers in his thorough round-up of the event in a recent blog post.
  • Christine Ferrusi Ross, an expert at understanding and solving customer problems, delivered an educational session on the controversial yet revolutionary blockchain technology. Christine explained how decentralization and self-sovereign identity are among the key outcomes of blockchain: it puts individuals at the center of their data ownership, with full control over their identity and the ability to share it as they desire, while the decentralization of the data provides the necessary layers of security.
  • Maribel Lopez, founder of Lopez Research, delivered a thoughtful discussion on the approach to enterprise digital transformation today and what it takes for IT leaders to stay ahead of the curve.
  • Joe Weinman, founder of XFORMA, shared four stages or “Digital Disciplines” for creating customer value and enabling competitive advantage: information excellence to complement operational excellence, solution leadership, collective intimacy and accelerated innovation. We definitely plan on learning more about these insights from the book he wrote on the topic.
  • Our very own Dr. Daniel Potts participated in Bob Egan’s panel focused on digital transformation and what it will look like in 2020.

In addition to these valuable learnings, nGage customer attendees named Cog Systems’ D4 Secure the Best Overall Digital Transformation Solution and also nominated Cog as a Vendor to Watch.

There’s no doubt that society as we know it is experiencing the next pivot in technology equivalent to the industrial revolution. While the future remains unknown, we appreciate the opportunity to be part of the conversation.
Thanks nGage for a great event!