On January 18, 2012, Jeff and Werner announced the general availability of Amazon DynamoDB, a fully managed, flexible NoSQL database service that delivers single-digit millisecond performance at any scale.
During the last 10 years, hundreds of thousands of customers have adopted DynamoDB. It regularly reaches new peaks of performance and scalability. For example, during the Prime Day sales in June 2021, it handled trillions of requests over 66 hours while maintaining single-digit millisecond performance, peaking at 89.2 million requests per second. Disney+ uses DynamoDB to ingest content, metadata, and billions of viewer actions each day. Even during the unprecedented demand caused by the pandemic, DynamoDB helped customers adapt as many people around the world changed the way they work and began to meet and conduct business virtually. For example, Zoom was able to scale from 10 million to 300 million daily meeting participants when we all started to make video calls in early 2020.
On this special anniversary, join us for a unique online event on Twitch on March 1st. I’ll tell you more about this at the end of this post. But before talking about this event, let’s take this opportunity to reflect on the genesis of this service and the main capabilities we have added since the original launch 10 years ago.
The History Behind DynamoDB
The story of DynamoDB started long before the launch 10 years ago. It started with a series of outages on Amazon’s e-commerce platform during the holiday shopping season in 2004. At that time, Amazon was transitioning from a monolithic architecture to microservices. The design principle was (and still is) that each stateful microservice uses its own data store, and other services are required to access a microservice’s data through a publicly exposed API. Direct database access was no longer an option. At that time, most microservices were using a relational database provided by a third-party vendor. Given the volume of traffic during the holiday season in 2004, the database system experienced some hard-to-debug and hard-to-reproduce deadlocks. The e-commerce platform was pushing the relational databases to their limits, even though we were using simple access patterns, such as querying by primary key only. These access patterns do not require the complexity of a relational database.
At Amazon and AWS, after an outage happens, we start a process called Correction of Error (COE) to document the root cause of the issue, to describe how we fixed it, and to detail the changes we’re making to avoid recurrence. During the COE for this database issue, a young, naïve, 20-year-old intern named Swaminathan (Swami) Sivasubramanian (now VP of the database, analytics, and ML organization at AWS) asked the question, “Why are we using a relational database for this? These workloads don’t need the SQL level of complexity and transactional guarantees.”
This led Amazon to rethink the architecture of its data stores and to build the original Dynamo database. The objective was to address the demanding scalability and reliability requirements of the Amazon e-commerce platform. This non-relational, key-value database was initially targeted at use cases that were the core of the Amazon e-commerce operations, such as the shopping basket and the session service.
Amazon published the Dynamo paper in 2007, three years later, to describe our design principles and share the lessons learned from running this database in support of Amazon’s core e-commerce operations. Over the years, we saw several Dynamo clones appear, proving other companies were searching for scalable solutions, just like Amazon.
After a couple of years, Dynamo was adopted by several core service teams at Amazon. Their engineers were very satisfied with its performance and scalability. However, when we interviewed engineers to understand why it was not more broadly adopted within Amazon, we learned that Dynamo gave teams the reliability, performance, and scalability they needed, but it did not reduce the operational complexity of running the system. Teams still needed to install, configure, and operate the system in Amazon’s data centers.
At the time, AWS offered Amazon SimpleDB as its NoSQL service. Many teams preferred the operational simplicity of SimpleDB despite its limitations: a domain could not easily scale beyond 10 GB, latency was unpredictable (it was affected by the size of the database and its indexes), and it provided only an eventual consistency model.
We concluded that the ideal solution would combine the strengths of Dynamo (the scalability and the predictable low latency to retrieve data) with the operational simplicity of SimpleDB: just declare a table and let the system handle the low-level complexity transparently.
DynamoDB was born.
DynamoDB frees developers from the complexity of managing hardware and software. It handles all the complexity of partitioning and re-partitioning your data to meet your throughput requirements. It scales seamlessly without the need to manually re-partition tables, and it provides predictable, low-latency access to your data (single-digit milliseconds).
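To make this concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The table name, key schema, and items are hypothetical examples, not taken from this post; the point is that you declare only a table and its keys, and DynamoDB handles partitioning and scaling behind the scenes.

```python
# Minimal sketch with boto3; "Sessions" and its attributes are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")

# Declare a table: only the key schema and billing mode are required.
# DynamoDB partitions and re-partitions the data transparently.
table = dynamodb.create_table(
    TableName="Sessions",
    KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand mode: no capacity planning
)
table.wait_until_exists()

# Reads and writes are simple key-based operations with predictable latency.
table.put_item(Item={"session_id": "abc-123", "user": "alice"})
item = table.get_item(Key={"session_id": "abc-123"})["Item"]
print(item)
```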
At AWS, the moment we launch a new service is not the end of the project. It is actually the beginning. Over the last 10 years, we have continuously listened to your feedback, and we have brought new capabilities to DynamoDB. In addition to hundreds of incremental improvements, we added:
Support for local and global secondary indexes to enable more complex query capabilities, without compromising on scale or availability (December 2013)
The ability to capture changes at scale with DynamoDB Streams (November 2014) and later with Amazon Kinesis Data Streams for DynamoDB (November 2020)
The ability to create global tables and replicate your data across AWS Regions (November 2017). This allowed you to create active-active applications hosted in multiple Regions. A DynamoDB global table consists of multiple replicas in multiple Regions. When an application writes data to a replica table in one Region, DynamoDB propagates the write to the other replica tables in the other Regions automatically.
A backup and restore capability for DynamoDB, for long-term retention and archiving for regulatory compliance needs (November 2017)
Point-in-time recovery (PITR), which continuously backs up your table and allows you to restore a fully consistent version of your data from any second in the recovery window (March 2018)
Adaptive capacity, which allows imbalanced workloads to run indefinitely (August 2018)
Support for ACID transactions (November 2018); a brief code sketch follows this list
Integration with AWS Backup (November 2021)
… and many more.
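As promised in the transactions item above, here is a minimal sketch of a transactional write with boto3. The Accounts table, its key, and the balance attribute are hypothetical; the sketch transfers an amount between two items so that either both updates succeed or neither is applied.

```python
# Hypothetical example: move 100 units between two account items atomically.
import boto3

client = boto3.client("dynamodb")

client.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"account_id": {"S": "alice"}},
                "UpdateExpression": "SET balance = balance - :amount",
                # Cancel the whole transaction if the source balance is too low.
                "ConditionExpression": "balance >= :amount",
                "ExpressionAttributeValues": {":amount": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"account_id": {"S": "bob"}},
                "UpdateExpression": "SET balance = balance + :amount",
                "ExpressionAttributeValues": {":amount": {"N": "100"}},
            }
        },
    ]
)
```

If the condition fails, DynamoDB cancels the entire transaction, so the two balances never get out of sync.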
Finally, at the most recent AWS re:Invent conference, we announced Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). This new DynamoDB table class lowers the cost of storing infrequently accessed data by 60%. The ideal use case is data that you need to keep for the long term and that your application accesses only occasionally, without compromising on access latency. In the past, to lower storage costs for such data, you had to write code to move infrequently accessed data to lower-cost storage alternatives, such as Amazon Simple Storage Service (Amazon S3). Now you can switch to the DynamoDB Standard-IA table class to store infrequently accessed data while preserving the high availability and performance of DynamoDB.
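Switching table classes is a single API call and requires no data migration. A minimal sketch with boto3 (the table name is hypothetical):

```python
# Hypothetical example: move an existing table to the Standard-IA table class.
import boto3

client = boto3.client("dynamodb")
client.update_table(
    TableName="AuditLogs",
    TableClass="STANDARD_INFREQUENT_ACCESS",  # use "STANDARD" to switch back
)
```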
How To Get Started
To get started with DynamoDB as a developer, you can refer to the Getting Started Guide in our documentation or read the excellent DynamoDB, Explained, written by Alex DeBrie, one of our AWS Heroes and the author of The DynamoDB Book. To dive deep into DynamoDB data modeling, AWS Hero Jeremy Daly is preparing a video course, “DynamoDB Modeling for the rest of us”.
Customers now use DynamoDB across virtually every industry vertical, geographic area, and company size. You continually surprise us with how you innovate on DynamoDB, and you continually push us to evolve it to make it easier to build the next generation of applications. We are going to continue to work backwards from your feedback to meet your ever-evolving needs and to enable you to innovate and scale for decades to come.
A Decade of Innovation with DynamoDB – A Virtual Event
As I mentioned at the beginning, we would also love to celebrate this anniversary with you. We have prepared a live Twitch event where you can learn best practices, see technical demos, and attend a live Q&A. You will hear stories from two of our long-time customers: SmugMug CEO Don MacAskill and engineering leaders from Dropbox. In addition, you’ll get a chance to chat with and ask questions of AWS blog legend and Chief Evangelist Jeff Barr, as well as DynamoDB’s product managers and engineers. Finally, AWS Heroes Alex DeBrie and Jeremy Daly will host two deep-dive technical sessions. Have a look at the full agenda here.
The event streams live on Twitch on March 1st; you can register today. The first 1,000 registrants from the US will receive a free digital copy of The DynamoDB Book (a $79 retail value).
To DynamoDB’s next 10 years. Cheers.