Serverless security requires a paradigm shift in how organizations view application security. Instead of building security around the application itself using Next Generation Firewalls, organizations must additionally build security around the functions within the applications hosted by third-party cloud providers. This additional layer of security ensures proper application hardening and least-privilege access control, so each function does no more and no less than what it is designed to do, helping organizations improve their security posture and maintain compliance.
Serverless computing refers to a cloud-computing model in which the cloud provider runs the servers and dynamically manages the allocation of machine resources. AWS Lambda, Google Cloud Functions, and Azure Functions are popular serverless platforms for building applications.
A serverless architecture provides the benefit of automated, nearly infinite scaling. Very little stands between developers and deployed code, which speeds time to market and makes it easier to maintain and test individual functions. Finally, pricing is based on the application resources actually consumed, so you pay only for what you use, which lowers costs.
Serverless represents an additional shift of responsibilities from the customer to the cloud provider. With no infrastructure to manage, operations overhead decreases significantly.
Shifting infrastructure management to your cloud provider enables you to focus on developing solutions that serve your organization and customers. It helps you maintain focus on your unique competitive advantages, and frequently results in cost savings not just on compute, but also from shifting people toward development work.
Here are some key points:
If any function within a container needs read access to S3, every function within that container has that privilege as well. With AWS Lambda, you have the opportunity to apply privileges to individual functions and ensure those privileges are restricted to the smallest scope necessary. If there is a vulnerability in one of your functions, an attacker gains access only to that function's limited capabilities, not the broad set of permissions granted to a container.
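To make that concrete, here is a minimal sketch (with hypothetical role, bucket, and policy names) of attaching an inline IAM policy that lets a single Lambda function read from one S3 bucket and nothing else:

```python
import json
import boto3

# Hypothetical names; substitute your own function's execution role and bucket.
ROLE_NAME = "orders-lookup-function-role"
BUCKET_ARN = "arn:aws:s3:::orders-archive"

# Least-privilege policy: this one function may only read objects from one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"{BUCKET_ARN}/*"],
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="orders-lookup-s3-read-only",
    PolicyDocument=json.dumps(policy),
)
```

Because the policy is attached to the role of a single function rather than shared across a container, compromising that function exposes only this narrow capability.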
With the changing structure of serverless applications, some new challenges arise.
With serverless applications, there is nowhere to place classic security controls such as a WAF, firewall, or IDS. Building walls between attackers and resources is no longer simple, for several reasons.
Serverless applications are more porous and fine-grained. Comprising dozens or hundreds of functions, a serverless application is a collection of tiny microservices, each with its own policies, role, API, audit trail, and so on. This changes the attack surface: instead of a small number of entry points with lots of functionality hidden behind each one, there are now many entry points, each with a small part of the app behind it. Defending your application now requires thinking about each entry point.
Various events can trigger functions, such as HTTP requests through an API gateway, file uploads to object storage, messages arriving on a queue, database changes, and scheduled tasks.
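For illustration, a minimal Python handler for a function triggered by an object-upload event might look like the following sketch (the event shape shown follows the standard S3 notification format; the processing logic is a placeholder):

```python
# Minimal AWS Lambda handler sketch for an S3 "object created" event.
def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Each invocation performs one small, well-scoped task,
        # e.g. validating or indexing the uploaded object.
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "ok"}
```

Each such trigger is its own entry point into the application, which is why the attack surface looks so different from a traditional monolith.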
While the motivations of attackers remain the same, the tactics they will use with serverless applications must change. Following are some of the serverless security threats unique to this new application architecture.
With serverless applications, you have the opportunity to apply privileges to individual functions and ensure such privileges are restricted to only the smallest scope necessary. This can enable you to significantly minimize your attack surface, as well as mitigate the impact of any attack.
Unfortunately, recent research from Check Point found that the vast majority of developers are not taking advantage of this opportunity. Our research discovered that 98 percent of functions in serverless applications are at risk, with 16 percent considered “serious.” Additionally, most of these functions are provisioned with more permissions than they require, permissions that could be removed to improve the security of the function and the application.
When analyzing functions, Check Point assigns a risk score to each function. This score is based on the posture weaknesses discovered, and factors in not only the nature of each weakness but also the context in which it occurs. After scanning tens of thousands of functions in live applications, we found that most serverless applications are simply not being deployed as securely as they need to be to minimize risk. The greatest security posture issue Check Point uncovered was unnecessary permissions; the remainder involved vulnerable code and configurations.
The fact that serverless functions are ephemeral and short-lived makes it more difficult for attackers to persist in your applications long term. Indeed, this is one of the many security advantages of serverless. However, simply because this makes life more difficult for attackers does not mean that they will stop attacking; they will just change their strategy.
The short duration of serverless functions means that serverless security threats may change shape. Attackers may construct a much shorter attack that steals, for example, just a few credit card numbers. This single round of the attack then repeats continuously, in what we refer to as the “Groundhog Day” attack.
Despite the short lifespans of cloud-native resources, attackers can still find ways to gain long-term persistence in your app. One way attackers can circumvent the ephemeral nature of serverless applications is through an upstream attack, or “Poisoning the Well.”
Cloud-native applications tend to comprise many modules and libraries with code from a variety of third-party sources. Attackers work to include malicious code in common projects. Then, after poisoning the well, the malicious code in your cloud apps can call home, get instructions, and wreak havoc.
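The article focuses on the threat itself, but one common mitigation worth noting is pinning and verifying third-party code before it ships. The sketch below (paths and digests are placeholders) fails a build if a vendored dependency no longer matches its known-good SHA-256 hash:

```python
import hashlib

# Placeholder values; record the expected digest when you first vet the dependency.
ARCHIVE_PATH = "vendor/some-library-1.4.2.tar.gz"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_artifact(path: str, expected_digest: str) -> None:
    """Raise if a vendored dependency does not match its pinned hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_digest:
        raise RuntimeError(f"Integrity check failed for {path}")

verify_artifact(ARCHIVE_PATH, EXPECTED_SHA256)
```

Verification of this kind does not stop a maintainer account from being compromised, but it ensures that what you reviewed is what actually runs in your functions.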
While this is not precisely a security “threat,” it is a challenge and a possible hindrance to your efforts to secure your serverless architecture.
Serverless conveys the benefit of increased application development velocity. Unfortunately, the traditional approach to security, where developers write code and package workloads, and security operations then puts security controls around those workloads, just will not work for serverless.
If developers must wait on security teams to open ports, IAM roles, or security groups for them, the benefit of increased velocity quickly erodes. Too often, the solution is to remove SecOps from the equation, which is itself a risk.
On the other hand, configuring permissions for the myriad serverless resources and the interactions between them is a time-consuming task. In addition, spending developers’ time on that security configuration can quickly get expensive, and it is not the ideal use of their time. Leveraging automation, such as the CloudGuard Platform, can increase serverless security without devoting excessive amounts of developer time.
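As a rough illustration of what such automation checks for (this is a generic sketch against standard AWS APIs, not a depiction of any particular product), the following lists Lambda functions and flags inline role policies that grant wildcard actions:

```python
import boto3

lam = boto3.client("lambda")
iam = boto3.client("iam")

# Walk every function and flag inline role policies that use wildcard actions.
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].split("/")[-1]
        for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
            doc = iam.get_role_policy(
                RoleName=role_name, PolicyName=policy_name
            )["PolicyDocument"]
            statements = doc["Statement"]
            if isinstance(statements, dict):
                statements = [statements]
            for stmt in statements:
                actions = stmt.get("Action", [])
                if isinstance(actions, str):
                    actions = [actions]
                if any(a == "*" or a.endswith(":*") for a in actions):
                    print(f"{fn['FunctionName']}: over-broad action in {policy_name}")
```

Running a check like this in a pipeline catches over-permissive roles without pulling developers into manual policy reviews.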
Another benefit of serverless is that you pay only for what you actually consume, which can result in reduced costs. Nevertheless, paying for precisely what you use means that any increases in processing time will increase costs.
Placing an excess of application security configuration in your app can add extra work to your functions, which increases costs. While adding processing time for the sake of security is a wise investment, it requires proper implementation to avoid excessive, unnecessary cost increases.
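A back-of-the-envelope estimate shows how this adds up; the rates and figures below are assumptions for illustration only, not current cloud pricing:

```python
# Illustrative estimate of the cost of added per-invocation security overhead.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed on-demand compute rate
MEMORY_GB = 0.5                     # 512 MB function
INVOCATIONS_PER_MONTH = 10_000_000
EXTRA_SECURITY_MS = 50              # assumed overhead added per invocation

extra_gb_seconds = MEMORY_GB * (EXTRA_SECURITY_MS / 1000) * INVOCATIONS_PER_MONTH
extra_cost = extra_gb_seconds * PRICE_PER_GB_SECOND
print(f"Added monthly cost: ${extra_cost:.2f}")  # roughly $4.17 under these assumptions
```

The absolute number is small in this example, but it scales linearly with invocation volume, memory size, and the amount of security work pushed into each function.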
As with the increased time for serverless security configuration discussed above, this is not exactly a threat, but rather a challenge you will have to tackle while securing your serverless architecture.