AWS Announcements at a Glance: The Highlights from AWS in February 2021


While much of the country is shaking off the effects of a massive nationwide cold front, one thing that remained as warm as ever was the AWS innovation engine.  True to form, the engineers at AWS were diligently innovating while many of us hunkered down to weather the storm.  You may feel caught in the doldrums of winter, but do not despair: there is no better cure for a case of the winter blues than an examination of the latest innovations and announcements from AWS.  What better way to get a technologist’s blood pumping as we prepare to enter the final month of Q1?

AWS was extra busy in February, rolling out availability updates to existing products in new regions and delivering new capabilities across its wide-ranging services portfolio.  While AWS tends to save the big announcements for re:Invent and other fan-favorite events, for those of you who are fans of networking, like me, February certainly had some hidden gems. Among the announcements were an update to AWS PrivateLink to provide Amazon S3 support, a sizable upgrade to AWS Direct Connect bandwidth in select locations, and the ability to leverage cookie stickiness on Application Load Balancers.

For brevity’s sake, this is a truncated list of the announcements rolled out by AWS in February.  The announcements I’ve selected highlight ways to optimize a common aspect of infrastructure design: networking. The purpose of this update is to draw attention to some of the announcements that we feel have significant value for an organization rethinking how it solves problems by leveraging the premier hyperscaler on the market.

Amazon S3 now supports AWS PrivateLink

Amazon S3 is one of the oldest and most versatile services in the AWS portfolio, but it has always carried special design considerations because it does not reside within a customer’s private VPC environment.  Since Amazon S3 is a public service, bucket objects are often accessed via their public URLs. While Amazon S3 provides several security mechanisms at both the bucket and object level, such as bucket policies and bucket ACLs, this method is not without its drawbacks: VPC-based resources are forced to egress out of the AWS ecosystem, hairpin through the public internet, and then reenter the public Amazon S3 ecosystem. For security- or performance-sensitive applications and use cases, this isn’t an ideal pattern.  To rectify this, AWS introduced gateway endpoints as a means of bypassing the unwieldy hairpin maneuver, allowing organizations to access Amazon S3 directly from internal VPC resources by leveraging route tables and predefined AWS prefix lists composed of public IP addresses.  This configuration negated the need to deploy an internet gateway and use publicly accessible IP addresses, keeping interaction with Amazon S3 fully within the confines of the greater AWS environment, increasing performance, and offering potential cost savings from the reduction in egress data.
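As a rough sketch of the gateway-endpoint approach described above, the AWS CLI call below creates an S3 gateway endpoint and associates it with a route table (the region, VPC ID, and route table ID are placeholders you would swap for your own):

```shell
# Sketch: create an S3 gateway endpoint (IDs below are placeholders).
# This adds routes for S3's public prefix list to the specified route
# table, so VPC resources reach S3 without an internet gateway.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-0123456789abcdef0
```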

The ability to leverage AWS PrivateLink to access Amazon S3 is a huge value-add.  AWS PrivateLink allows organizations to privately access AWS services using private IP addresses, without requiring an internet gateway or a NAT gateway.  With the exception of Amazon S3 and Amazon DynamoDB, all supported AWS services use interface endpoints, which are powered by AWS PrivateLink, in lieu of the route-table-dependent gateway endpoints.  Interface endpoints are elastic network interfaces placed in local subnets that behave as horizontally scaled, redundant, and highly available VPC components. This announcement gives solutions architects a means to develop a consistent access methodology across all interface endpoints and to get away entirely from leveraging public IP addresses from within a VPC.
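For comparison with the gateway endpoint, a minimal sketch of creating an S3 interface endpoint backed by AWS PrivateLink might look like the following (again, the IDs are placeholders):

```shell
# Sketch: create an S3 *interface* endpoint powered by PrivateLink
# (IDs below are placeholders). The endpoint places elastic network
# interfaces with private IPs into the specified subnets.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Interface \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```

One design note: unlike gateway endpoints, S3 interface endpoints expose endpoint-specific DNS names, so clients typically need to target those names rather than the default public S3 hostnames.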

To learn more, check out the official announcement here.

AWS Direct Connect announces native 100Gbps dedicated connections at select locations

Continuing with the focus on networking updates, Direct Connect is now offering native 100Gbps pipes! The key here is that the 100Gbps pipe is native and does not require a link aggregation group (LAG) to achieve that aggregate throughput, which reduces operational overhead.  This sort of horsepower is likely overkill for the vast majority of AWS’s customers, but for the organizations that need this much throughput, this is a major victory.  The applications called out specifically in the announcement are those that work with large-scale datasets, such as broadcast media distribution, advanced driver-assistance systems used for autonomous vehicles, and financial services trading and market information systems.

While the amount of throughput is sizable, it is important to recognize that it is still only a single AWS Direct Connect (DX) connection, and out-of-the-box DX connections are not resilient against device or colocation failures.  AWS does recommend staying true to the design principles set forth in the Well-Architected Framework as they pertain to redundancy and disaster recovery.  This rollout appears to be available in most regions rather than being a limited-availability deployment, but, as always, you’ll want to check availability in your specific region.
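For a sense of what requesting one of these looks like, a sketch of ordering a dedicated connection via the AWS CLI follows; the location code and connection name are placeholders, and you would first confirm that your chosen colocation facility offers 100Gbps ports:

```shell
# Sketch: request a native 100Gbps dedicated DX connection.
# "EqDC2" is a placeholder Direct Connect location code; list valid
# codes and their available port speeds with:
#   aws directconnect describe-locations
aws directconnect create-connection \
  --location EqDC2 \
  --bandwidth 100Gbps \
  --connection-name my-100g-dx
```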

More information on the 100Gbps Dedicated DX connections can be found here.

Application Load Balancer now supports application cookie stickiness

Even though load balancers are technically housed under the Amazon EC2 section of the AWS console, they handle inbound traffic, so they loosely tie into the overall networking theme we’ve got going on.  Out of the box, Application Load Balancers (ALBs) route inbound requests to a registered target based on the selected algorithm. AWS realized that treating each request as an independent session wasn’t always the optimal behavior. With that shortcoming in mind, AWS introduced the option to implement duration-based stickiness between clients and targets: by leveraging an ALB-generated cookie, administrators can define how long the load balancer consistently routes a given user’s requests to the same target.

The introduction of application cookie support now provides the ability for clients to connect to the same load balancer target for the duration of their session. This added capability gives solutions architects the ability to leverage custom cookie names and specific criteria for individual target groups. Rest assured that this stickiness will not tether a client session to an unhealthy instance; the basic principles of ALB health checks still apply. Instances that become unhealthy will be pulled out of rotation, and active sticky sessions residing on that target will be migrated to another stable, healthy target.  Best of all, this feature comes at no additional cost and is immediately available in all AWS regions.
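To make the configuration concrete, here is a sketch of enabling application-cookie stickiness on a target group via target group attributes (the target group ARN and cookie name are placeholders):

```shell
# Sketch: switch a target group from the default behavior to
# application-based stickiness keyed on a custom cookie name.
# The ARN and cookie name below are placeholders.
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef \
  --attributes \
    Key=stickiness.enabled,Value=true \
    Key=stickiness.type,Value=app_cookie \
    Key=stickiness.app_cookie.cookie_name,Value=MYAPPCOOKIE \
    Key=stickiness.app_cookie.duration_seconds,Value=86400
```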

More information on the Application load balancer cookie stickiness can be found here.

Amazon VPC Endpoints for AWS CloudHSM

AWS CloudHSM is clearly a security tool, and in no universe would it be pulled into a discussion about networking, unless of course we were discussing how AWS has developed the ability to leverage VPC endpoints to present CloudHSM APIs within a VPC. AWS CloudHSM is a cloud-based hardware security module that lets organizations generate and manage their own encryption keys. This fully managed AWS service handles everything from hardware provisioning to the automation of common tasks. Similar to our previous discussion on Amazon S3, the true value of this announcement is the ability to leverage AWS PrivateLink, which provides access to CloudHSM APIs without the use of an internet gateway, NAT appliance, VPN connection, or DX connection.  The traffic generated within your Amazon VPC environment never leaves the boundaries of AWS, and communication is accomplished using private IP addresses.  The obvious advantage is that it prevents sensitive material from ever leaving the confines of the protected AWS environment.
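The setup mirrors the interface endpoints discussed earlier; a sketch of creating a CloudHSM API endpoint inside a VPC might look like this (IDs are placeholders):

```shell
# Sketch: create an interface endpoint for the CloudHSM v2 API
# (IDs below are placeholders). With private DNS enabled, standard
# CloudHSM API hostnames resolve to private IPs inside the VPC.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.cloudhsmv2 \
  --vpc-endpoint-type Interface \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```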

More information on the Amazon VPC endpoint for AWS CloudHSM can be found here.

To follow these monthly updates and gain insights on how they can impact your business, subscribe to our blog.

