Security Through the Stages: From Application Development to Deployment

Security has traditionally been thought of from an infrastructure perspective: least-privilege policies, network security with security groups, network ACLs (NACLs), and so on. While these controls remain relevant today, the shift toward serverless and container-based workloads means they are increasingly being superseded by other aspects of security that fall within the space of application development.

One area of interest in security is lifecycle update management. As the boundary between development and operations dissolves under DevOps, application developers often find themselves dealing with security as projects near deployment, such as when containers are built. Not only is the developer in charge of putting the application into a container, they must also assume responsibility for lifecycle management and the security considerations around it, something many have never done before. The lack of attention commonly observed around these security initiatives suggests that this shift in responsibility is facing a lag in acceptance.

This opens up a new dilemma. While it is natural to expect developers to be squarely focused on business logic and on delivering the features that drive business operations, security considerations are exceedingly important. Ambiguity over who is truly responsible for them, the DevOps team or the developers, leaves a gap in how well security measures are addressed and in who can be held accountable. Improving these processes is therefore a significant requirement.

Security Considerations During Development

When developing applications, responsibility for security can be shared between teams at different stages of the development lifecycle, whether during testing or deployment before the application goes out to production. Of these teams, one may be in charge of implementation while another is in charge of scrutiny, ensuring that best practices are being followed. Particular areas deserve attention, such as memory overflows, in addition to employing sound security policies and practices.

Memory & Resource Overflows

Memory overflows, or buffer overflows, are anomalies in which a program overruns a buffer's boundaries while writing data to it, overwriting adjacent memory locations. This issue is most commonly encountered when dealing with shared memory and computational resources, whether you are running multiple container workloads on AWS compute or using a container orchestration engine like Kubernetes, where memory may be shared across multiple containers, at least from the host's perspective.
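To make the concept concrete, here is a minimal Python sketch. It uses ctypes to mimic C-style raw memory copies, since pure Python buffers are bounds-checked; the buffer size and payload are hypothetical. An unchecked copy of this kind would silently overrun the fixed-size buffer, while an explicit length check turns the bug into a visible error.

    import ctypes

    # A fixed-size, C-style buffer of 16 bytes (hypothetical size).
    buf = ctypes.create_string_buffer(16)

    # Attacker-controlled input that is larger than the buffer.
    payload = b"A" * 64

    def copy_into_buffer(dest, data):
        # ctypes.memmove is a raw copy with no bounds checking, much like
        # C's memcpy; copying more bytes than the buffer holds would
        # silently overwrite adjacent memory.
        if len(data) > ctypes.sizeof(dest):
            raise ValueError("input larger than destination buffer")
        ctypes.memmove(dest, data, len(data))

    copy_into_buffer(buf, payload)  # raises instead of corrupting memory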

AWS has measures in place to ensure that isolation between these compute instances is well tested and that data in one account's Amazon EC2 instances is segmented off from other customers' data. However, if you are running your own container workloads on a shared Amazon EC2 instance, perhaps through an orchestrator like Kubernetes, these challenges reemerge: sensitive data in one workload is exposed to potential risk from a vulnerability in another containerized workload on the same host. Serverless applications within AWS are segregated from one another in much the same way as Amazon EC2 compute. They are not merely namespace-separated in the Linux kernel like containers; they are isolated at the hypervisor level, which manages memory allocation and CPU cycles, and therefore benefit from much stronger separation.
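When containers do share a host, per-container resource limits are a partial mitigation: they keep one workload from exhausting the memory or CPU its neighbours depend on, though they do not by themselves isolate sensitive data between containers. The sketch below assumes the Docker SDK for Python (docker-py) and a hypothetical image name, and shows how such limits might be applied when launching a workload.

    import docker

    client = docker.from_env()

    # Run a (hypothetical) workload with hard resource ceilings so a
    # misbehaving or compromised container cannot starve its neighbours
    # on the shared host.
    container = client.containers.run(
        "example.com/payments-service:1.4.2",  # hypothetical image name
        detach=True,
        mem_limit="256m",        # cap memory at 256 MiB
        nano_cpus=500_000_000,   # cap CPU at roughly half of one core
        read_only=True,          # mount the root filesystem read-only
    )
    print(container.id)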

Security Policies and Practices

There are many routes that can be taken to deal with an ever-growing list of security challenges and considerations. Security needs to be considered everywhere: at deployment, when analyzing the nature of attacks against application code, and when placing conditionals into your code that check for authorization so that there are no exceptions or unexpected events.
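As a simple illustration of that last point, the following sketch shows an explicit authorization conditional guarding a sensitive operation, raising a clear exception when the check fails rather than letting execution continue unexpectedly. The permission names, user records, and function are hypothetical and not tied to any particular framework.

    class AuthorizationError(Exception):
        """Raised when a caller attempts an operation they are not permitted to perform."""

    def delete_customer_record(user, record_id):
        # Explicit authorization conditional: check before acting, and fail
        # loudly so there are no silent, unexpected code paths.
        if "records:delete" not in user.get("permissions", []):
            raise AuthorizationError(
                f"user {user.get('id')} is not authorized to delete records"
            )
        # ... perform the deletion only after the check passes ...
        print(f"record {record_id} deleted by user {user['id']}")

    # Example usage with hypothetical users
    admin = {"id": "u-1", "permissions": ["records:read", "records:delete"]}
    intern = {"id": "u-2", "permissions": ["records:read"]}

    delete_customer_record(admin, "r-42")    # succeeds
    delete_customer_record(intern, "r-42")   # raises AuthorizationError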

There are, however, timeless default considerations that are good practice to implement. Well-thought-out policies around data labelling and identifying which data needs to be protected, such as user information that is access-restricted to only those who are authorized and absolutely need it, help ensure that security can largely be upheld without much additional effort. These processes may slow development velocity in certain respects, but automation can mitigate those slowdowns.
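One way to automate that kind of policy is to attach sensitivity labels to data fields and let a small helper enforce them, so restricted fields never leave a service unless the caller is explicitly authorized. The sketch below is a minimal, framework-agnostic example; the field names and labels are hypothetical.

    # Sensitivity labels attached to each field of a (hypothetical) user record.
    FIELD_LABELS = {
        "user_id": "public",
        "display_name": "public",
        "email": "restricted",
        "ssn": "restricted",
    }

    def filter_record(record, caller_is_authorized):
        # Automatically strip restricted fields unless the caller has been
        # explicitly authorized to see them; individual callers never have
        # to remember which fields are sensitive. Unknown fields default to
        # "restricted" so new data is protected until labelled otherwise.
        return {
            field: value
            for field, value in record.items()
            if FIELD_LABELS.get(field, "restricted") == "public" or caller_is_authorized
        }

    user = {"user_id": "u-7", "display_name": "Ada",
            "email": "ada@example.com", "ssn": "000-00-0000"}
    print(filter_record(user, caller_is_authorized=False))  # public fields only
    print(filter_record(user, caller_is_authorized=True))   # full record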

If you would like to explore the topic of security further, check out security controls you should review when preparing for a data breach threat. If you’d like to start implementing security initiatives within your organization, get in touch with our security and compliance team today!
