AWS re:Invent 2019 Recap: Andy Jassy’s Keynote Event


Bright and early on a chilly Tuesday morning at Amazon Web Services’ re:Invent 2019 conference, AWS CEO Andy Jassy took the stage in front of an excited crowd at the Venetian Expo Center in Las Vegas to deliver his keynote address. With attendance topping 65,000 this year, the keynote venue and overflow rooms throughout the re:Invent campus were packed. If you missed the keynote, don’t fear: we have all the details right here.

At this eighth annual event, Andy reflected on the origin of the conference’s name: re:Invent came about from seeing the rate of invention across customers, big and small. Looking over the last six years in particular, Andy called out the first major theme of this year’s conference: enterprises and government agencies adopting the cloud and reinventing themselves, and the major factors, from a leadership perspective, that make that reinvention successful. With a top-down, prescriptive approach to pushing and embracing adoption built on these four tenets, developers and engineers will be raring to go.

As the band played Queen’s “Don’t Stop Me Now”, Andy moved the focus to compute and some of the ways AWS has been ahead of the pack and reinventing itself. While Amazon EC2 already offers a larger footprint, broader capabilities, and more purpose-specific instance types than competing clouds, along with 100Gb networking on standard instances and a choice of processors, there is still room for more.


From here, Andy jumped right into some new announcements.

Networking

AWS Transit Gateway Multicast

Multicast, often an important consideration in enterprise network design and transport, has generally not been available in the cloud due to its performance and security implications. One-to-many multicast communication now comes to Transit Gateways. This relatively new service, which has quickly become the go-to hub for on-premises-to-cloud connections, VPN, and multi-account and multi-VPC connectivity, now supports multicast for applications and networks that require it. Learn more here.
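To get a feel for the new API surface, here is a minimal boto3 sketch; the Transit Gateway, attachment, subnet, and ENI IDs are all placeholders, and this is an illustration rather than a complete setup.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a multicast domain on an existing Transit Gateway (placeholder ID).
domain = ec2.create_transit_gateway_multicast_domain(
    TransitGatewayId="tgw-0123456789abcdef0"
)["TransitGatewayMulticastDomain"]

# Associate a VPC attachment's subnets with the multicast domain.
ec2.associate_transit_gateway_multicast_domain(
    TransitGatewayMulticastDomainId=domain["TransitGatewayMulticastDomainId"],
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Register an ENI as a member of a multicast group.
ec2.register_transit_gateway_multicast_group_members(
    TransitGatewayMulticastDomainId=domain["TransitGatewayMulticastDomainId"],
    GroupIpAddress="224.0.0.250",
    NetworkInterfaceIds=["eni-0123456789abcdef0"],
)
```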

AWS Accelerated Site-to-Site VPN

With the new Accelerated VPN option, when connecting a VPN to your Transit Gateway you can choose to route traffic largely over AWS-controlled networks for a boost in performance and fewer congestion issues. Instead of traversing the public internet all the way to the regional ingress point of your Transit Gateway, traffic routes to the nearest AWS Global Accelerator edge location and then rides AWS’s private network to your Transit Gateway. Learn more about the announcement here.
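As a rough sketch, acceleration is a flag on the VPN connection itself; the customer gateway and Transit Gateway IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Site-to-Site VPN attached to a Transit Gateway with acceleration enabled.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId="cgw-0123456789abcdef0",   # placeholder
    TransitGatewayId="tgw-0123456789abcdef0",    # placeholder
    Type="ipsec.1",
    Options={"EnableAcceleration": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```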

AWS Transit Gateway Inter-region Peering

Since inception, Transit Gateways have supported connectivity within a single region only, limiting use cases where multiple regions needed transitive peering. Transit Gateways can now be peered across regions transitively, without third-party appliances or complex workarounds.
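A hedged boto3 sketch of what the peering workflow looks like, with placeholder gateway IDs, account ID, and regions; the attachment is requested from one region and accepted in the peer region.

```python
import boto3

ec2_east = boto3.client("ec2", region_name="us-east-1")
ec2_west = boto3.client("ec2", region_name="us-west-2")

# Request a peering attachment from the us-east-1 Transit Gateway (placeholder IDs).
attachment = ec2_east.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="123456789012",
    PeerRegion="us-west-2",
)["TransitGatewayPeeringAttachment"]

# Accept the attachment on the peer side.
ec2_west.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"]
)
```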

AWS Transit Gateway Network Manager

Visualize, monitor, and configure your global AWS network alongside your on-prem resources in a single pane of glass with Transit Gateway Network Manager. By pulling in integrated SD-WAN monitoring, routing changes, and the CloudWatch metrics and logs related to changes across the entire network, Network Manager brings more transparency to operational changes and events in every part of the network. Learn more about AWS Transit Gateway Network Manager.
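As a minimal illustration, registering an existing Transit Gateway into a global network might look like the sketch below; the description and Transit Gateway ARN are placeholders.

```python
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

# Create a global network container for your topology.
global_network = nm.create_global_network(
    Description="Global corporate network"
)["GlobalNetwork"]

# Register an existing Transit Gateway (placeholder ARN) so its attachments,
# routes, and events surface in Network Manager.
nm.register_transit_gateway(
    GlobalNetworkId=global_network["GlobalNetworkId"],
    TransitGatewayArn="arn:aws:ec2:us-east-1:123456789012:transit-gateway/tgw-0123456789abcdef0",
)
```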

Compute

Amazon EC2 M6g, R6g, C6g Gravitron 2 Powered ARM Instances

M6g, R6g, and C6g instances were announced during yesterday’s keynote, powered by the custom Graviton2 chip, an improvement on the first-generation Graviton chip that went GA in the A1 instances, delivering a 7x improvement in speed and double the floating-point performance of the A1. These 64-bit, multi-core instances will be available in general purpose, memory-optimized, and compute-optimized types, with NVMe variants as well. M6g is available today; the others will arrive in early 2020. Learn more about the announcement here.
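Launching one of the new instances looks the same as any other EC2 launch, with the caveat that the AMI must be built for the arm64 architecture; the AMI ID and instance size below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single Graviton2-based M6g instance from an arm64 AMI (placeholder ID).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # must be an arm64 AMI
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
)
```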

Amazon EC2 Inf1 ML Inference Optimized Instances

New custom-developed AWS Inferentia chips aim to lower the high cost of running inference operations in machine learning environments. These low-latency, high-throughput instances offer a 40% lower cost compared to G4 instances. The new instance class integrates with the TensorFlow, PyTorch, and MXNet machine learning frameworks and supports running within Amazon ECS, Amazon EKS, and Amazon SageMaker. Delivering over 2,000 TOPS at sub-millisecond latency, they will be available in early 2020. Learn more about the announcement here.

AWS Nitro Enclaves

AWS Nitro Enclaves provide isolated, secure compute environments attached to your Amazon EC2 instances, running on the same Nitro hypervisor hardware as the instances themselves. These specialized, hardened compute environments are designed to protect the most sensitive PII, healthcare, and financial data. They have no administrator or operator access; only local-channel communication with your instance. Learn more about AWS Nitro Enclaves here.

Containers

AWS Fargate for Amazon EKS

The popular serverless container hosting option, originally built on Amazon ECS to let you run production container workloads without worrying about the underlying compute environment, now comes to the Amazon EKS control plane. Customers who want familiar Kubernetes deployment methodologies and management tools can now deploy containers to a serverless EKS cluster transparently, without first deploying a cluster of workers or worrying about scaling and maintenance. AWS Fargate for Amazon EKS is available today. Learn how you can get started here.
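In practice, you tell EKS which pods should land on Fargate by creating a Fargate profile; a minimal boto3 sketch follows, with the cluster name, role ARN, subnets, and namespace all placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Route pods in the "default" namespace of an existing cluster onto Fargate.
eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="my-cluster",                                          # placeholder
    podExecutionRoleArn="arn:aws:iam::123456789012:role/EKSFargatePodRole",  # placeholder
    subnets=["subnet-0123456789abcdef0"],                              # private subnets
    selectors=[{"namespace": "default"}],
)
```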


Modernization Trends

At this point, Andy switched topics to discuss how many companies are actively looking to modernize and to decide what to take with them, and what to leave behind, in their move to the cloud. That means moving away from big legacy systems with quickly dwindling skill pools, uncontrollable costs, and painful licensing and auditing processes. Here are some of the trends AWS is seeing in the modernization space.

Mainframes

Mainframe customers are steadily migrating off mainframes and into the cloud; some, like Western Union, with microservice implementations, and others with slower, piece-by-piece workload and application replacements that relegate the mainframe to non-essential tasks.


Databases

Commercial relational database management systems (RDBMS) like the tried-and-true Oracle and Microsoft SQL Server are increasingly driving customers to look for more open and affordable solutions with less lock-in. As running commercial databases has become more restricted, through actions like removing Bring Your Own License options, customers have grown frustrated with the policies and upkeep of commercial solutions.

One of the downsides is that open source engines require a lot of effort and tuning to become as performant and reliable as their commercial counterparts. This is one reason AWS has put so much effort into Amazon Aurora, and why it is “the fastest-growing AWS service, ever,” with tens of thousands of customers; it is as performant, durable, and available as commercial databases at one-tenth the cost.


Windows to Linux

As with commercial relational database engines, customers have concerns about price, lock-in, and being under the thumb of a single owner/operator, Microsoft, for a large component of their application stacks. It’s expensive, limiting, and disconnects them from the fast-moving, vibrant community and security response that come with Linux. Linux deployments in the cloud are growing at almost 4x the pace of Windows.

Partners Being Chosen

Most ISVs and service providers are adapting and aligning their offerings to a single technology platform, and all are starting with AWS because its market position is so strong and the customer base is there, including providers such as Salesforce, Workday, Splunk, Datadog, and Databricks. Customers can use all of these services easily on the platform when weighing the value of moving to the cloud.

We’re also seeing a lot of companies using different system integrators when moving to the cloud than they had before, choosing SIs that are committed to the platform and have trained professionals with deep knowledge of it.

“A lot of the heavy lifting of moving Enterprises to the cloud is being done by SIs that have either pivoted their model quicker and realized what the future was. These are companies like Slalom and Rackspace, or born in the cloud SIs who don’t have to worry about cannibalizing their existing business, and who are very happy to pick up the small pilot projects for all of you that don’t pay very much, they don’t seem like they’re worth it. But everybody knows you can’t move unless you get pilots done successfully. So they’re willing to bet on the future. These are companies like Onica..” – Andy Jassy


Amazon S3 Data Lakes


Amazon S3 Access Points

Amazon S3 is the most popular storage for data lake objects. It’s cheap, automatically tierable, and supports batch and event-driven operations, but one of the most difficult problems has been segregating data access.

With Amazon S3 Access Points, instead of relying purely on path-based bucket policies, you can create new endpoints for any given path, with many access points per bucket. There’s no worrying about an errant role exposing some path to the wrong team or department when each team is only given its own unique endpoint. Access points can also be restricted to a VPC, and they scale well to large numbers of access points. Learn more about Amazon S3 Access Points here.
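A hedged sketch of the flow with boto3: create an access point scoped to a VPC and grant one team read access to its prefix through that access point only. The account ID, bucket, VPC, role, and prefix names are placeholders.

```python
import json
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")
ACCOUNT_ID = "123456789012"  # placeholder

# Create a VPC-restricted access point on the data lake bucket (placeholder names).
s3control.create_access_point(
    AccountId=ACCOUNT_ID,
    Name="analytics-ap",
    Bucket="my-data-lake",
    VpcConfiguration={"VpcId": "vpc-0123456789abcdef0"},
)

# Grant the analytics team role read access to its prefix via this access point.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/AnalyticsTeam"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:us-east-1:{ACCOUNT_ID}:accesspoint/analytics-ap/object/analytics/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=ACCOUNT_ID, Name="analytics-ap", Policy=json.dumps(policy)
)
```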


Amazon Redshift

Lake House

With the goal of reducing data silos and duplication, Lake House lets you query data in one place across your data warehouse and your data lake in Amazon S3.

Federated Query

Released to preview yesterday, Federated Query gives users the ability to query live data across Amazon Redshift, Amazon S3 data lakes, and Amazon RDS and Amazon Aurora PostgreSQL. This allows real-time data aggregation and simplifies collating data across services. The Redshift query optimizer pushes work down to the source systems as much as possible to make the process as efficient and fast as it can be.
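As a rough illustration of how this looks from the SQL side, the sketch below maps a live Aurora PostgreSQL schema into Redshift and joins it against a warehouse table; the connection details, schema and table names, and ARNs are all placeholders, run here through psycopg2 against the cluster endpoint.

```python
import psycopg2

# Placeholder Redshift cluster connection details.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Expose a live Aurora PostgreSQL schema inside Redshift (placeholder URI and ARNs).
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS postgres_sales
    FROM POSTGRES
    DATABASE 'sales' SCHEMA 'public'
    URI 'my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds'
""")

# Join live operational rows with warehouse data in a single query.
cur.execute("""
    SELECT o.order_id, o.total, c.segment
    FROM postgres_sales.orders o
    JOIN analytics.customers c ON c.customer_id = o.customer_id
""")
print(cur.fetchall())
```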

Data Lake Export

With Data Lake Export, your Amazon Redshift query results can now be shipped directly to Amazon S3 in Apache Parquet format, ready to be consumed in your data lake. Your Redshift data can then be easily analyzed by services such as Amazon SageMaker, Amazon Athena, and Amazon EMR.
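A similarly hedged sketch of the export side: an UNLOAD statement writing partitioned Parquet to S3. The table, bucket path, and IAM role ARN are placeholders.

```python
import psycopg2

# Placeholder Redshift connection; bucket path and role ARN are illustrative.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="...",
)
conn.autocommit = True
conn.cursor().execute("""
    UNLOAD ('SELECT order_id, total, order_date FROM analytics.orders')
    TO 's3://my-data-lake/redshift/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
    FORMAT AS PARQUET
    PARTITION BY (order_date)
""")
```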

Learn more about Federated Query and Data Lake Export here.

Amazon Redshift RA3 Instances with Managed Storage


Smarter compute, and decoupling compute capacity from storage capacity, is the goal of RA3 instances, which use a smart, tiered, ultra-fast SSD and offload unused or infrequently used data to Amazon S3.

“The new Amazon Redshift RA3 Instances with Managed Storage deliver 2x performance for most of our queries compared to our previous DS2 instances-based Redshift cluster. Redshift Managed Storage automatically adapts to our usage patterns. This means we don’t need to manually maintain hot and cold data tiers, and we can keep our costs flat when we process more data.” – Andy Jassy

AQUA – Advanced Query Accelerator

With the smart tiered offloading to Amazon S3 in the new RA3 instances, AQUA introduces an accelerated cache layer to close the performance gap and address the latency introduced by tiered storage. AQUA is set to be released in mid-2020. Learn more.


Amazon Elasticsearch Service

Customers love Amazon Elasticsearch Service, but storing data in it is expensive at scale, and the data is structured for search rather than storage efficiency. When customers store less data than they’d like, key metrics and analytics go missing. Existing tiered approaches work, but they are slow and complex.

UltraWarm

With 90% savings over Amazon Elasticsearch Service’s hot storage, and 80% savings over existing warm storage options, UltraWarm introduces a cost-effective yet expedient storage tier for long-term retention, enabling analytics with more value over a longer period. UltraWarm is available in preview today. Learn more.
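As a hedged sketch of how this might be enabled when creating a domain: UltraWarm nodes are part of the cluster configuration alongside hot data nodes and dedicated masters. The domain name, instance types, counts, and volume size are all placeholder assumptions.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Create a domain with hot data nodes, dedicated masters, and UltraWarm nodes.
# All names, types, and counts below are illustrative placeholders.
es.create_elasticsearch_domain(
    DomainName="log-analytics",
    ElasticsearchVersion="7.1",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 2,
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "r5.large.elasticsearch",
        "DedicatedMasterCount": 3,
        "WarmEnabled": True,
        "WarmType": "ultrawarm1.medium.elasticsearch",
        "WarmCount": 2,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)
```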


Power Packed Databases

Amazon has put a lot of effort into purpose-built databases that fit the needs of many use cases. From time series and key/value stores to document and graph databases, purpose-built datastores can support many new workloads more efficiently, effectively, and scalably than a traditional RDBMS.


Amazon Managed Apache Cassandra Service (MCS)

At the request of many customers tired of managing, scaling, upgrading, and maintaining Cassandra databases, and to bring the ease of other AWS managed databases to Cassandra users, Amazon announced Managed Apache Cassandra Service, available in preview today. Learn more about Amazon MCS here.
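Because MCS speaks the CQL protocol, existing drivers should work against the managed endpoint. The sketch below uses the open source Python cassandra-driver, assuming the us-east-1 preview endpoint, TLS with the Amazon root CA, and service-specific credentials; the certificate path and credentials are placeholders.

```python
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# TLS is required; the certificate file path below is a placeholder.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("AmazonRootCA1.pem")
ssl_context.verify_mode = CERT_REQUIRED

# Service-specific credentials generated for an IAM user (placeholders).
auth = PlainTextAuthProvider(
    username="cassandra-user-at-123456789012",
    password="service-specific-password",
)

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],  # assumed preview endpoint
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()
print(list(session.execute("SELECT keyspace_name FROM system_schema.keyspaces")))
```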


Machine Learning

Machine learning has seen incredible growth at AWS in recent years. AWS is devoted not only to making machine learning more powerful, but also to making it friendlier, building it into services that make it more approachable and usable for those who want more abstraction, while also making it easier for the data scientists and developers who work with models and code daily.

Frameworks

AWS continues to support multiple frameworks and has teams dedicated to each to improve their performance on AWS. 85% of TensorFlow workloads run on AWS, but 90% of data scientists use multiple frameworks. Internal AWS teams have been able to improve performance across the board on these frameworks compared to teams working on closed hardware, as seen in impressive results on training Mask R-CNN.


Amazon SageMaker Studio

The first Integrated Development Environment (IDE) for machine learning! This web-based IDE brings together many services to make managing code, notebooks, datasets, settings, and folders, and debugging all of them, easier. Build, train, tune, and deploy models from a single interface, and share and organize notebooks and projects with others. Learn more about Amazon SageMaker Studio here.


SageMaker Notebooks

Take your Notebooks with you across instances. The previously manual, tedious process of managing Notebooks is now automatic in SageMaker Studio. Open Notebooks quickly and efficiently without provisioning instances.

SageMaker Experiments

Making every step of building and tuning models simpler, SageMaker Experiments will save the history of tuning and adjusting models, making your history searchable and easy to revisit. It captures all relevant data of each run, and can compare and visualize the differences with the given parameters anytime.

SageMaker Debugger

Enabled by default, SageMaker Debugger sends all relevant metrics across frameworks so you can view and debug what is happening inside a running model. It can prioritize the dimensions that matter most, letting you drill further into the training process from the notebook in SageMaker Studio.

SageMaker Model Monitor

Real-world data can drift over time relative to the data a model was trained on. With SageMaker Model Monitor, a single click configures and deploys monitoring that detects and visualizes drift over time and alerts you when models stray from their original accuracy.

SageMaker Autopilot

Newcomers and seasoned ML data scientists alike stand to benefit from SageMaker Autopilot and its ability to work through 50 different candidate models, automatically transforming the data and training each one, then delivering notebooks alongside the models so they can easily be used and evolved as needed.
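Under the hood, an Autopilot run is a single job you point at a CSV dataset in S3 and a target column. A minimal boto3 sketch follows; the job name, bucket paths, target column, and execution role are placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Kick off an Autopilot job against tabular data in S3 (placeholder names/paths).
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",  # column Autopilot should predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)
```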

Amazon Powered AI Top-Layer Services

Using the powerful new and existing AI services within the machine learning suite, Amazon has crafted a few new services based on how it has used ML within Amazon’s retail business, and made them easily consumable for AWS customers.

Fraud Detection with Machine Learning

After you submit a simple training set of transaction and user data with details like email and IP address, AWS builds a model and exposes an API endpoint that can be consumed from your environment, returning a fraud score for any given transaction with no interaction at the ML layer required. Learn more here.

Amazon CodeGuru

Amazon CodeGuru initially delivers AWS best-practice recommendations by analyzing pull requests from GitHub and AWS CodeCommit for customer projects. By simply adding Amazon CodeGuru as a reviewer, it can leave comments based on a model trained against AWS internal code reviews and reviews from over 10,000 open source projects. The service has been used internally at AWS for some time, and making it public lets every engineering team catch best-practice issues during code review and the PR process.
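For CodeCommit repositories, onboarding amounts to associating the repository with the reviewer service; a minimal boto3 sketch, with the repository name as a placeholder, might look like this.

```python
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

# Associate a CodeCommit repository (placeholder name) so future pull requests
# receive automated CodeGuru review comments.
association = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service-repo"}}
)
print(association["RepositoryAssociation"]["State"])
```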


Further, the CodeGuru Profiler can be deployed as an agent within applications, instrumenting them to identify issues and make recommendations at the operational level, such as pinpointing the most latency- and CPU-intensive lines of code.


Internal Amazon usage of these services across 80,000 applications has led to over ten million dollars in savings. Learn more about Amazon CodeGuru here.

Contact Lens for Amazon Connect

Adding machine learning and a pipeline of existing AWS services to Amazon Connect turns an already powerful and efficient call routing system into one that can track subject data on transcribed conversations, such as sentiment and duration, and even infer knowledge gaps from pauses in the conversation. Full content and metadata search across past conversations comes first, with real-time analysis and alerting to follow, identifying problem calls as they happen and allowing immediate reaction. Learn more about this service here.

Amazon Kendra

With Amazon Kendra, you can connect internal sources such as Exchange, Jira, Confluence, and other unstructured enterprise data stores so they are automatically indexed and analyzed, yielding better, more identifiable, structured, and searchable answers to questions and searches against that data. Learn more.
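Once an index and its data sources are set up, applications ask questions through a simple query call; a hedged boto3 sketch, with the index ID and question as placeholders, is shown below.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask a natural-language question against an existing index (placeholder ID).
response = kendra.query(
    IndexId="0123abcd-0123-4567-89ab-0123456789ab",
    QueryText="How do I request a new laptop?",
)

# Print the result type and document title for each hit.
for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
```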


AWS Outposts


Announced last year, AWS Outposts aims to bring native AWS deployments of an initial set of supported services to on-premises data centers, for workloads that for any reason are not a good fit for the cloud. Learn more about how Onica supports AWS Outposts here.

On-Premise Workloads

Configurable in the console with a click, you specify the makeup of your required Outpost and AWS will build, deliver, install, and maintain it, after which it appears in the console to be worked with like any other region. Outposts are available with AWS native services today, with VMware Cloud on AWS coming later in 2020.

AWS Local Zones

Alternatively, many customers around the world need low-latency access to regional resources for some workloads but don’t want to build and maintain a data center. Using Outposts as a building block, AWS has created Local Zones: small, single-site deployments of compute, storage, and databases. Available today, by invitation, is the Los Angeles Local Zone, which provides localized resources and networking to the Southern California area.
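Once your account is allowed in, using a Local Zone is an opt-in at the zone-group level; the sketch below assumes the Los Angeles group name is us-west-2-lax-1, which should be verified against your own account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt the account in to the Los Angeles Local Zone group (assumed group name).
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1", OptInStatus="opted-in"
)

# List all zones, including Local Zones, with their type and opt-in status.
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["ZoneType"], zone["OptInStatus"])
```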

Mobile Workloads

Andy brought Verizon CEO Hans Vestberg onstage to discuss 5G as it relates to data usage and latency: for all the bandwidth available, traffic still has to make many hops, and incur a lot of latency, before it reaches the internet and thus the cloud.

AWS Wavelength

There are many challenges to putting data at the edge to build low-latency solutions that bypass these topology issues. AWS is introducing AWS Wavelength, which moves workloads into Wavelength Zones connected directly at the city aggregation points of the 5G network. Initially this will be released in the US in partnership with Verizon, and in the EU with Vodafone.


After all of that, Andy shared one more quick recap of how much change the AWS cloud can offer, and left us all with heads spinning from all these new services and resources.

Stay tuned for more highlights and recaps with Onica so you don’t miss out on the action!
