How to Address Cybersecurity Unforced Errors in 2019

Looking back on the progress that the cybersecurity industry made in 2018, I remain optimistic that advancements will continue over the year ahead. But the industry also made some big misses, and we all share some accountability for them.

From my perspective, there were some unforced errors in 2018 that have continued to plague our industry. What does this mean going into 2019? Read on…

Apathy

We hear about data breaches almost every day, so it’s no surprise that cyber fatigue plagues consumers, government and enterprise alike. In fact, a recent survey found that one in three government employees believed they were more likely to be struck by lightning than to have their work data compromised.

Government and enterprise have created an environment and a culture that is nothing short of numbing to the public. We remain indifferent until we hear about cyber attacks like the latest “Collection #1,” which exposed a record-breaking 773 million email addresses and 21 million passwords. Perhaps a breach like this will awaken consumers to DEMAND more from the businesses and organizations that hold personal information with such clear disregard.

Lip Service

Over the course of my travels this past year, I’ve had the opportunity to hear some of the smartest people in the business talk about cybersecurity. The themes are all the same – huge growth, big problem, critical need, market demand, essential to our future, and investment in all kinds of time and money to solve the problem.

This is not isolated to consulting firms (we expect them to be hyperbolic); some of the leading technology companies in a position to directly impact the industry in a huge way are guilty too. These big players have slideware that is impressive, spectacular even, yet it’s still just talk. I contend the single most common element of all this talk is simple – “the problem is big, and you’d better pay attention. I have no practical solutions for you today, but don’t worry, we are working on it.”

Cybersecurity Needs to Show Business Value

Apathy and lip service are just two of the many key drivers that shape any culture. But wait, there’s more. The reality is evident in the facts. Since the CISO is still a fairly new position, they are rarely invited to a seat in the boardroom or even report to the CEO. When it comes to budget, holiday celebration expenditures have a better chance of getting approved than the newest cybersecurity tool.

So what can any good organization do to address these issues while we wait for the public to assemble and protest? They make a change. That means elevating cybersecurity into a profit center that delivers measurable business value. This is the opportunity we must work to embrace.

Here at Cog, our tagline is all about security via the virtualization of IoT. Yet, our customers find the value in the measurable ROI they realize through the use of our technology, and security just comes along for the ride. The approach to security must be proactive and demonstrate real value to the business by minimizing risk, reducing cost and improving performance. All of this leads to company profit, which is how the CISO earns a seat in the boardroom. If we do that, then we can break through the apathy and lip service that has become our new reality.

In spite of it all, I still have nothing but optimism for the future. It’s a good time to be bold and elevate cybersecurity to a new level that will eventually change the industry and the world for the better.

Cog Joins GSA IoT Security Working Group

Addressing today’s IoT security challenge takes more than technology solutions. It requires experience, industry knowledge and expertise, collaboration and creativity. Most importantly, it takes a group of leaders that share a common goal.

Today, we are honored to announce that Cog has become a member of the Global Semiconductor Alliance (GSA) and the GSA IoT Security Working Group. The working group was established to address end-to-end issues in IoT security and comprises various IoT ecosystem security stakeholders, including chipset vendors, platform companies, cloud vendors and service providers. Its goal is to promote best practices in IoT security, share information on threats and attacks, define security requirements and inform standards bodies.

In collaboration with the other members of the GSA IoT Security Working Group, Cog is honored to have the opportunity to lead a project focused on using Rich Execution Environments to drive enhanced security on End-Point Devices. This is but one of the many projects currently under GSA sponsorship, but it is critical to supporting the industry with recommendations on standards and best practices to secure the rapidly expanding number of IoT devices being deployed in the home, workplace, and manufacturing segments.

As cyber threats and attacks continue to become more aggressive and complex, organizations like the GSA will be critical to staying ahead of the hackers and providing the IoT industry with a security framework based on best practices and standards. We look forward to being a part of that effort and contributing to improving IoT security for enterprise and consumers alike.

Listen, We Need to Talk About Your Attack Surface

As Miguel de Cervantes wisely said, “Never put all your eggs in one basket.” Yet, when we design our IoT software as one large, cumbersome, monolithic stack – well, that is exactly what we are doing.

And it all comes down to the attack surface.

For a software platform, the attack surface describes all the different points where an attacker could get into a system, and where they could get data out. The attack surface of an application, therefore, includes all of the following:
  • the sum of all paths for data/commands into and out of the application;
  • the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding);
  • all valuable data used in the application, including secrets and keys, intellectual property, critical business data, personal data and PII;
  • the code that protects the data, including encryption and checksums, access auditing, and data integrity and operational security controls.

Traditionally, we’ve thought of attack surface as it relates to perimeter.  After all, a perimeter is easy to think about as an area that needs to be patrolled and/or protected from harm by a bad actor.  And so, it’s always been important to protect the perimeter.

On the surface (no pun intended), that makes sense. Consider this analogy. If we had one big, monolithic building that measured 10×5, we would have a perimeter to patrol of 30 units (2 × (10 + 5)). Now, if I carved up my monolithic building into a series of 25 smaller, modularized buildings that each measure 2×1, that would equal a total perimeter of 150 units (25 × 2 × (2 + 1)). If I told you that having 25 modular buildings would be more secure than 1 monolithic building – would that even be worthy of a discussion? Nope – your attack surface is 400% larger! Think of all the added work you would have to do just to protect that expanded perimeter.
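The arithmetic in the analogy is easy to check in a few lines (a throwaway sketch; the dimensions are the ones from the example above):

```python
# Perimeter arithmetic for the building analogy (units are arbitrary).

def perimeter(width, height):
    """Perimeter of a single rectangular building."""
    return 2 * (width + height)

monolithic = perimeter(10, 5)       # one 10x5 building
modular = 25 * perimeter(2, 1)      # twenty-five 2x1 buildings

print(monolithic)                           # 30 units
print(modular)                              # 150 units
print((modular - monolithic) / monolithic)  # 4.0, i.e. a 400% increase
```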

The problem with that logic is that in the world of software a modular approach is much more secure than a monolithic approach.

Dr. Gernot Heiser and two of his undergraduate students performed an analysis to prove out this approach. They developed a modular system built on seL4 and were able to completely eliminate 40% of the critical threats from the Common Vulnerabilities and Exposures (CVE) database, while mitigating most of the remainder to one degree or another. Click Here to Read the White Paper.

Why does this counterintuitive logic on attack surface work? Simple – instead of one key to the whole house, you have 25 keys, and even then each of the 25 modules only has access to the minimum set of services required to do its job, and no others. A microkernel supports exactly this kind of fully modular design, which means a breach of the perimeter of one module does not compromise the entirety of the system.

So, in spite of the increase in the size of your perimeter (aka attack surface), a modular approach that gives each module only the resources it requires cuts the risk to your IoT device and delivers a more secure system. Or put another way – go ahead and put your eggs in many baskets.

Microkernels Really Do Improve Security

We are thrilled to re-post this entry from microkerneldude – “Random rants and pontifications” by Gernot Heiser.

Many of us operating systems researchers, going back at least to the US DoD’s famous 1983 Orange Book, have been saying that a secure/safe system needs to keep its trusted computing base (TCB) as small as possible. For a system of significant complexity, this argues for a design where components are protected in their own address spaces by an underlying operating system (OS). This OS is inherently part of the TCB, and as such should itself be kept as small as possible, i.e. based on a microkernel, which comprises only some 10,000 lines of code. Which is why I have spent about a quarter century improving microkernels, culminating in seL4, and getting them into real-world use.


Monolithic vs microkernel-based OS structure.

It is intuitive (although not universally accepted) that a microkernel-based system has security and safety advantages over a large, monolithic OS, such as Linux, Windows or macOS, with their million-lines-of-code kernels. Surprisingly, we lacked quantitative evidence backing this intuition, beyond extrapolating defect density statistics to the huge TCBs of monolithic OSes.
Finally, the evidence is here, and it is compelling

Together with two undergraduate students, I performed an analysis of Linux vulnerabilities listed as critical in the Common Vulnerabilities and Exposures (CVE) database. A vulnerability is tagged critical if it is easy to exploit and leads to full system compromise, including full access to sensitive data and full control over the system.

For each of those documented vulnerabilities we analysed how it would be affected if the attack were performed against a feature-compatible, componentised OS, based on the seL4 microkernel, that minimised the TCB. In other words, an application running on this OS should only be dependent on a minimum of services required to do its job, and no others.

We assume that the application requires network services (and thus depends on the network stack and a NIC driver), persistent storage (a file system and a storage device driver) and console I/O. Any OS services not needed by the app should not be able to impact its confidentiality (C), integrity (I) or availability (A). For example, an attack on a USB device should not impact our app. Such a minimal-TCB architecture is exactly what microkernels are designed to support.
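The dependency argument can be sketched as a toy model. The service names and the `app_affected` helper below are illustrative inventions, not from the paper; the point is only that a compromised service can hurt the application only if the application actually depends on it:

```python
# Toy model of the minimal-TCB argument. In a monolithic kernel every
# service runs with kernel privileges, so any compromise is total; on a
# microkernel, only services in the app's TCB can affect it.

# What the application depends on (beyond the kernel itself):
APP_TCB = {"network_stack", "nic_driver", "file_system",
           "storage_driver", "console"}

def app_affected(compromised_service, monolithic):
    """Can compromising this service impact the application?"""
    if monolithic:
        return True  # kernel-privileged service: full-system compromise
    return compromised_service in APP_TCB

print(app_affected("usb_driver", monolithic=True))   # True  (Linux-style)
print(app_affected("usb_driver", monolithic=False))  # False (seL4-style)
print(app_affected("nic_driver", monolithic=False))  # True  (required service)
```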

The facts

The complete results are in a peer-reviewed paper; I’m summarising them here. Of the 115 critical Linux CVEs, we could analyse 112; the remaining 3 did not have enough information to understand how they could be mitigated. Of those 112:

  • 33 (29%) were eliminated simply by implementing the OS as a componentised system on top of a microkernel! These are attacks against functionality Linux implements in the kernel, while a microkernel-based OS implements them as separate processes. Examples are file systems and device drivers. If a Linux USB device driver is compromised, the attacker gains control over the whole system, because the driver runs with kernel privileges. In a well-designed microkernel-based system, only the driver process is compromised, but since our application does not require USB, it remains completely unaffected.
  • A further 12 exploits (11%) are eliminated if the underlying microkernel is formally verified (proved correct), as we did with seL4. These are exploits against functionality, such as page-table management, that must be in the kernel, even if it’s a microkernel. A microkernel could be affected by such an attack, but in a verified kernel, such as seL4, the flaws which these attacks target are ruled out by the mathematical proofs.
    Taken together, we see that 45 (40%) of the exploits would be completely eliminated by an OS built on seL4!
  • Another 19 (17%) of exploits are strongly mitigated, i.e. reduced to a relatively harmless level, only affecting the availability of the system, i.e. the ability of the application to make progress. These are attacks where a required component, such as the NIC driver or network stack, is compromised; this compromises the whole Linux system, while on the microkernel it might lead to the network service crashing (and becoming unavailable) without compromising any data. So, in total, 57% of attacks are either completely eliminated or reduced to low severity!
  • 43 exploits (38%) are weakly mitigated with a microkernel design, still posing a serious security threat but no longer qualifying as “critical”. Most of these were attacks that manage to control the GPU, implying the ability to manipulate the frame buffer. This could be used to trick the human user into entering sensitive information that could be captured by the attacker.

Only 5 compromises (4%) were not affected by OS structure. These are attacks that can compromise the system even before the OS assumes control, e.g. by compromising the boot loader or re-flashing the firmware; even seL4 cannot defend against attacks that happen before it is running.
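As a sanity check, the tallies above can be re-computed from the raw counts (a throwaway sketch using only the numbers quoted in this post):

```python
# Re-tallying the breakdown of the 112 analysable critical Linux CVEs.

total_analysed = 112
eliminated_by_design = 33        # functionality moved out of the kernel
eliminated_by_verification = 12  # in-kernel flaws ruled out by proofs
strongly_mitigated = 19          # reduced to availability-only impact
weakly_mitigated = 43            # serious, but no longer "critical"
unaffected = 5                   # pre-OS attacks (boot loader, firmware)

# The five categories account for every analysed CVE.
assert (eliminated_by_design + eliminated_by_verification
        + strongly_mitigated + weakly_mitigated + unaffected) == total_analysed

eliminated = eliminated_by_design + eliminated_by_verification
print(round(100 * eliminated / total_analysed))                         # 40 (%)
print(round(100 * (eliminated + strongly_mitigated) / total_analysed))  # 57 (%)
```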


Effect of (verified) microkernel-based design on critical Linux exploits.


What can we learn from this?

So you might ask: if verification prevents compromise (as in seL4), why don’t we verify all operating systems, in particular Linux? The answer is that this is not feasible. The original verification of seL4 cost about 12 person-years, with many further person-years invested since. While this is tiny compared to the tens of billions of dollars’ worth of developer time invested in Linux, it is about a factor of 2–3 more than it would cost to develop a similar system using traditional quality assurance (as done for Linux). Furthermore, verification effort grows quadratically with the size of the code base. Verifying Linux is completely out of the question.
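To get a feel for what quadratic scaling means, consider a back-of-the-envelope sketch. The ~10,000-line and 12-person-year figures for seL4 come from the text above; the 20-million-line figure for a monolithic kernel is a rough, illustrative assumption:

```python
# Back-of-the-envelope: if verification effort grows quadratically with
# code size, scaling from seL4 to a monolithic kernel is hopeless.
# The 20M-line figure below is a rough, illustrative assumption.

SEL4_LOC = 10_000
SEL4_EFFORT_PY = 12  # person-years for the original seL4 proofs

def verification_effort(loc):
    """Projected person-years under purely quadratic scaling."""
    return SEL4_EFFORT_PY * (loc / SEL4_LOC) ** 2

print(verification_effort(10_000))      # 12.0 person-years (seL4 itself)
print(verification_effort(20_000_000))  # 48 million person-years
```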

The conclusion seems inevitable: The monolithic OS design model, used by Linux, Windows, macOS, is fundamentally and irreparably broken from the security standpoint. Security is only achievable with a (verified) microkernel design. Furthermore, using systems like Linux in security- or safety-critical applications is at best grossly negligent and must be considered professional malpractice. It must stop.

This article was originally posted at https://microkerneldude.wordpress.com/2018/08/23/microkernels-really-do-improve-security/