Timescale Cloud provides a balance between a familiar developer platform and a flexible database for modern applications, while retaining the ease and scalability of modern cloud services.
All data is time-series data.
That was our mantra when we launched TimescaleDB, 4.5 years ago. All data has a time dimension, yet most databases only provide a static snapshot of the current state. Instead, by storing data along that time dimension (“time-series data”), we get the dynamic view of what is happening right now, how that is changing, and why that is changing. We get to watch a movie, not just a static image.
Similarly, we started this company with the conviction that PostgreSQL is the best foundation for applications. PostgreSQL is proven, versatile, and extensible. It has the fastest-growing open-source database community today. It has a broad ecosystem of tooling, connectors, libraries, visualization applications, and more. And most importantly, PostgreSQL is boring: you want your database to be like your Internet connection — fast, reliable, and something that just works.
Time has proven that we were correct in these beliefs. Today, Timescale users are pushing the envelope across every industry, including companies like Akamai, Bosch, Cisco, Comcast, Credit Suisse, DigitalOcean, Electronic Arts, HPE, IBM, Microsoft, Nutanix, NYSE, OpenAI, Rackspace, Schneider Electric, Samsung, Siemens, Uber, Walmart, Warner Music, and many more. The vibrant Timescale community now runs over 3 million active databases every month, enabling developers to measure everything that matters across a myriad of use cases: software applications, industrial equipment, financial markets, blockchain activity, consumer behavior, machine learning models, and climate change, to name but a few.
But developers’ preferences continue to evolve. In particular, developers now increasingly turn to managed database services, instead of running their databases in-house. We recognized this shift last year, when we went “all in” on the cloud by making all our previous enterprise features free for the community, and choosing to only monetize via our managed cloud services. Thanks to the success of that move, we raised $40 million in an oversubscribed round last May, led by Redpoint Ventures (investors in Snowflake, Twilio, Stripe, HashiCorp, Heroku), who called our cloud business “one of the fastest-growing database businesses we have seen in the past 20+ years.”
Managed database services are the future, yet we find today’s landscape of managed database services to be lacking. At one extreme, we see database-as-a-service (DBaaS) offerings that still maintain the server metaphor of the self-managed world. These “cloud databases” compete on price and performance, yet the developer experience is often an afterthought. At the other extreme, serverless data platforms optimize for the developer experience, but as a consequence they hide the database and underlying software architecture behind opaque APIs, leading to more developer confusion and vendor lock-in. (And they end up nickel-and-diming developers for every little insert and query.)
We believe there’s a better way. The future is serverless, but not database-less. Developers want a truly easy and worry-free experience, but shouldn’t have to blindly trust a black box for their core applications. Developers want a database that is easy to get started, easy to use, and easy to scale so they can focus on their applications. But they also want to understand, diagnose, and even tinker with their database. Thus, the modern database needs to remain familiar and flexible.
Today, we are sharing our vision for a modern data platform that combines the ease of use expected from a cloud service with the flexibility that developers need. We call this the database cloud.
The database cloud provides developers with a familiar interface so that they can bring their existing skills to solve the complex problems of today, be it in financial services, web-scale infrastructure, IoT, or more. The database cloud gives developers the ability to dig deep into the internals of the database and truly understand how their data is managed.
If “black box” services require developers to sacrifice this understanding, then the “transparent box” developers get with the database cloud empowers them.
We’re also announcing the new Timescale Cloud, a database cloud for relational and time-series workloads, built on PostgreSQL, and architected around this new vision. Timescale Cloud is not a database that somebody else manages on somebody else’s cloud infrastructure (“a cloud database”), but a full cloud experience built around and for databases (“a database cloud”).
These are the first announcements of many this month, our second launch month of the year. This past May, we kicked off our first launch month – an ambitious effort to execute 10+ launches throughout the month (#AlwaysBeLaunching) – and it was a huge success.
This October, we are doing it again with another 10+ launches, starting with this blog post today.
To learn more about the new Timescale Cloud, and our new vision for the future of database services in the cloud, please continue.
Or to try Timescale today (for free!), please sign up here.
The database cloud: serverless, but not database-less
At Timescale, we are dedicated to serving developers worldwide, enabling them to build exceptional data-driven products that measure everything that matters: software applications, industrial equipment, financial markets, blockchain activity, consumer behavior, machine learning models, climate change, and more.
At the core of these data-driven products are great databases. TimescaleDB (aka “Postgres for time-series”) is our core product: 100% free, open source (“open core” to be precise, and all on GitHub), and built on Postgres. Developers who use TimescaleDB get the benefit of a purpose-built time-series database, plus a classic relational (Postgres) database, all in one, with full SQL (not “SQL-like”) support. And with our scale-out multi-node functionality introduced in TimescaleDB 2.0, TimescaleDB now powers petabyte-scale workloads. A relational database that scales, all for free.
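To make the “relational plus time-series, all in one” point concrete, here is a minimal sketch of the core TimescaleDB workflow. The table and column names (`conditions`, `device_id`, `temperature`) are hypothetical; `create_hypertable` and `time_bucket` are TimescaleDB functions, and everything else is plain Postgres SQL:

```sql
-- An ordinary Postgres table for sensor readings (hypothetical schema)
CREATE TABLE conditions (
  time        TIMESTAMPTZ       NOT NULL,
  device_id   TEXT              NOT NULL,
  temperature DOUBLE PRECISION
);

-- Convert it into a hypertable, automatically partitioned on the time column
SELECT create_hypertable('conditions', 'time');

-- Full SQL still applies: hourly averages per device over the last day
SELECT time_bucket('1 hour', time) AS hour,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
WHERE time > now() - INTERVAL '1 day'
GROUP BY hour, device_id
ORDER BY hour;
```

The hypertable looks and behaves like a regular Postgres table to inserts, queries, joins, and tools, which is what allows relational and time-series data to live side by side in one database.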
A funny thing, though: if a database is doing its job correctly, much like your Internet connection (or your plumbing), you shouldn’t have to think about it. You should be able to leverage skills and knowledge you already have, including query languages you already know (i.e., SQL), and existing tools and applications that just work. You shouldn’t have to worry about reliability, availability, provisioning, backups, resource scaling, performance optimizations, or configuration tuning. Not just today when starting off, but also in the future as your workload scales or use cases grow more complex.
The evolution from a self-managed database to a managed database service has been a big step forward for this worry-free experience. Yet DBaaS services that maintain a strong physical server metaphor still offload much of the ease-of-use and scaling burden onto developers, who need to endlessly tweak hardware instances and cluster configurations to achieve the scale that modern applications demand. That’s where the serverless paradigm comes in (and why it has taken off). Serverless data platforms, at least in theory, free the developer from thinking about scaling and appear easy to start.
Where today’s serverless data platforms fall short
Serverless data platforms and APIs are essentially SaaS services. They seem deceptively simple at first: you get a network endpoint you can GET and POST, and everything goes fine as you onboard onto the new platform in its preferred manner.
But as you dig deeper, you realize these APIs are not familiar: there are new APIs or query languages to learn, and new proprietary tools to adopt.
New APIs are not just a problem during the learning phase: they also mean vendor lock-in to a proprietary ecosystem. And proprietary ecosystems typically make it hard to export your data.
Also, these proprietary SaaS services are often “black boxes”, where your only visibility is the external API, but you have a murky (at best) understanding of the underlying architecture.
So the honeymoon period ends quickly, often when you introduce a new type of query, or new workload, or larger data volume, or increased insert or query rate. Or when something even worse happens: the platform starts behaving differently than before, but nothing has changed.
Performance debugging is always challenging, but near impossible when you have no lower-level visibility into the underlying system and no mental model of how the underlying systems work.
This frustration with black-box services is real. Indeed, when our engineers were trying to benchmark AWS Timestream (compared to TimescaleDB), this same problem reared its ugly head: AWS Timestream significantly underperformed our expectations (and many others saw similar issues), yet our engineers had no idea if AWS Timestream’s performance could be improved, let alone how to do so.
Yes, SaaS vendors can blog about their underlying services, but even then, the underlying architecture is probably some complex, polyglot, Rube-Goldberg-like microservice architecture that was never designed for familiarity or understandability.
And the general lack of flexibility with serverless data platforms confounds developers as they grow to more complex, production use cases. While many developers might have the same 70-80% needs, it’s the 20-30% that always differ. But enabling this long tail of needs isn’t about just adding more UI buttons to press, it’s about a software architecture that is flexible – that can be customized and optimized for developers’ use cases, yet in a way that a developer can understand. That’s not the case for a SaaS architecture with dozens of subtly interacting microservices.
So today’s serverless data platforms are not familiar or flexible. But further, black boxes are never truly easy and worry-free: you never know if there are any skeletons lurking in the proverbial closet, just waiting to cause your service to fall over. Or if they will surprise you with unpredictably high costs, given all the hidden and opaque charges that often go into monthly consumption bills.
Our vision: the database cloud
We’ve alluded to what developers want in their cloud databases: easy, scalable, familiar, and flexible. Let’s unpack what those mean.
Easy means being able to start with a single click, and then not having to worry about resource sizing, configurations, scale limitations, performance, failures and recovery, upgrades or versioning, security, and more. The service should just work. It should feel easy both to get started and to grow with, both for beginners and power users. Serverless data platforms might deliver on “easy for beginners”, but their black-box abstractions fall short on “easy for power users”.
Scalable means the platform should scale arbitrarily with need. But scalability is not only about resources (data volumes, ingest or query rates, or even more subtle issues like data cardinality). It’s also about scaling organizationally. It should supercharge developer productivity, and grow easily with workflows: from dev/test environments, to production deployments, to sharing data insights across teams, and to multiple applications and use cases within an org. And scalability is finally about cost effectiveness – being able to achieve the most performance while keeping your workload within a reasonable budget, including as your workload grows. Today’s serverless platforms scale infrastructurally but not organizationally, nor cost effectively.
Familiar means not needing to learn a new query language or set of APIs, nor adopt proprietary connectors or tools, nor try (and likely fail) to understand a whole new architecture. Familiarity is how the database scales organizationally, when many developers, product and business owners, and others can already use it. Familiarity also implies that your developers have (or can easily pick up) a clear mental model of the architecture: they can understand how data is stored, indexed, and processed; know when they should be worrying about the service; and know what they can do about it. Serverless platforms are often built around new, custom, proprietary APIs – they are simply not familiar.
Flexible means the database works for more than some limited use case or narrow operating conditions: it is a horizontal platform that developers can customize to their needs. It’s not only key-value lookups or basic built-in functions, but powerful, rich queries and analytics. It isn’t limited to storing floats for metrics or in-line labels for tags, but supports many data types, formats, schemas, and indexes. It allows developers to easily trade off between cost and performance, and to better structure or distribute their data based on need. Flexibility is how the database scales with new use cases: it performs well not just in some narrow operating conditions, but across a range of applications and workloads. Which also means that expertise gained on one project can be carried forward to the second, third, and tenth projects. Given their lack of such flexibility, serverless platforms fall short here as well.
We think of such services as a “transparent box”: easily packaged and accessible to get started, yet with a transparency that provides familiarity and flexibility as you scale. It’s easy and worry-free not just when starting off, but forever, as your workloads and use cases grow.
Introducing the new Timescale Cloud
(Some of you may remember that we launched the first “Timescale Cloud” 2.5 years ago, as the world’s first fully-managed time-series database-as-a-service on AWS, GCP, Azure. That product is alive and well, and fully supported as before, but is now called “Managed Service for TimescaleDB”.)
Today we are announcing the new Timescale Cloud (formerly known as “Timescale Forge”), a database cloud for relational and time-series workloads, built on PostgreSQL, and architected around this vision of the “transparent box”.
Timescale Cloud combines the best of the DBaaS and serverless SaaS data platforms.
Unlike DBaaS services, Timescale Cloud is easy and scalable. The platform is built around a modern cloud architecture, with compute and storage fully decoupled. All storage is replicated, encrypted, and highly available: even if the physical compute hardware fails, the storage stays online and the platform immediately spins up new compute resources, reconnects them to storage, and quickly restores availability. Users can independently resize and scale compute and storage based on their needs (and budget), or set the platform to autoscale storage with their consumption (with autoscaling compute in the works). Whenever a database’s resource configuration changes, the platform automatically re-tunes a user’s database and optimizes it for the new configuration. It’s easy to get started and scale with need.
Unlike pure serverless data platforms, Timescale Cloud is familiar and flexible. It allows developers to build on skills and knowledge they already have with databases. It’s full SQL (not a “SQL-like variant”), the query language they and other teams already know. It works with all the tools, connectors and ORMs, and applications they already use. And it is built on Postgres, so a developer familiar with Postgres (or relational databases more generally) will immediately understand how to use, EXPLAIN, diagnose, and optimize their data models and queries on TimescaleDB. Developers can be immediately productive.
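As one small illustration of that familiarity, the standard Postgres introspection workflow applies unchanged — no new debugging tools to learn. The table and query below are hypothetical, but `EXPLAIN` and its options are stock Postgres:

```sql
-- Diagnose a slow aggregation exactly as you would on vanilla Postgres
-- ('conditions' is a hypothetical hypertable with time/device_id/temperature)
EXPLAIN (ANALYZE, BUFFERS)
SELECT device_id, max(temperature)
FROM conditions
WHERE time > now() - INTERVAL '6 hours'
GROUP BY device_id;
```

The resulting plan shows exactly which chunks and indexes are scanned, so a developer’s existing Postgres intuition for reading plans and adding indexes carries over directly.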
In short, a coherent architecture like TimescaleDB’s – built atop decades of Postgres open-source development and fitting developers’ existing mental models – enables this in a way that a SaaS platform with an opaque, overly complex software architecture never can.
And even though this is the first announcement of our new vision, this product has already been available for the last year, powering production workloads for companies worldwide, including: analytics for 10 public transit agencies, music streaming services, smart agriculture, SaaS billing, customer and marketing platforms, building management, supply-chain logistics, real estate, crypto, and many others.
"Our experience using Timescale Cloud has been fantastic. We're really impressed with the core technical innovations, particularly around hypertables and compression, and it solves a lot of problems for us as we work to build aggregated and derived data sets on top of our core tables. We're excited to see what Timescale continues to build in the future!”
- Adam Inoue, Messari (Case Study)
"Timescale Cloud has been a game-changer for us at Enfinite. Specifically, the Continuous Aggregates and Compression features made querying and storage of large volumes of high-frequency IoT data effortless. The platform’s scalability and ease of use clearly makes it a long-term solution for our database needs."
- Varun Rai, CTO and co-founder of Enfinite Technologies
"Timescale Cloud is really helping us scale. Our workload includes queries across both historical and real-time data and our volume keeps growing and growing, so we were struggling getting enough query speed with vanilla Postgres. Timescale Cloud gives us better performance at a much more cost-effective price point than any other solution we looked at."
- Elango Thevar, CEO and co-founder of Neer
Timescale Cloud is the database cloud for time-series and provides all the goodness of TimescaleDB, but now as part of a “transparent box” that’s just one click away. It includes:
- Decoupled compute and storage for maximum flexibility and cost-effectiveness
- Compute from 0.25 vCPU to 32 vCPU for a wide variety of workloads
- Storage volumes from 10GB to 16TB per compute node, all with built-in replication
- Effective storage of 100TB+ per node, or petabyte-scale for multi-node deployments, via best-in-class compression for 94-97% space savings
- High availability via instant recovery at no additional cost
- One-click database pause and resume
- Autoscaling storage with zero downtime and configurable limits
- Automated database configuration, yet with fine-grained power-user control
- Automated database re-tuning, whenever resource configurations change
- Automated data retention policies for easy data lifecycle management
- Automated continuous aggregates to power dashboards and monitoring applications
- Automated user-defined actions for in-database job scheduling
- Point-in-time recovery via automated, continuous incremental backups
- Automated, zero-downtime upgrades during maintenance windows
- Explorer dashboard for an easy, visual in-console interface
- Data encrypted at rest and in transit
- Flexible role-based database access control
- Flexible VPC peering (one-click service migrations from public, dev, test, prod VPCs)
- Platform observability via metrics and logs
- Database observability into internal statistics, jobs, locking, and more
- Query observability via plan- and execution-time EXPLAINs
- Close to 40 popular PostgreSQL extensions
- Works with any PostgreSQL ORM, connector, or tool
- Top-rated, highly-technical support team, available 24/7
- Fully transparent pricing, with fine-grained pricing shown alongside all resource (re)configurations
- Plans starting at $24/month (or 3¢ per hour) with usage-based pricing
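Several of the automated features above – continuous aggregates, data retention, and compression – are each one SQL statement away. A hedged sketch, assuming a hypothetical hypertable `conditions` with `time`, `device_id`, and `temperature` columns (the policy functions shown are TimescaleDB’s; intervals are illustrative):

```sql
-- Continuous aggregate: hourly rollups maintained automatically
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Keep the aggregate fresh on a schedule
SELECT add_continuous_aggregate_policy('conditions_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');

-- Compress chunks older than a week, segmented by device
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id');
SELECT add_compression_policy('conditions', INTERVAL '7 days');

-- Drop raw data older than 90 days
SELECT add_retention_policy('conditions', INTERVAL '90 days');
```

Once set, these policies run as in-database background jobs, so dashboards query the rollups while raw data is compressed and eventually aged out without any external schedulers.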
During Launch Month this October, Timescale Cloud will be getting some great new additions:
- First-class support for multi-node TimescaleDB services, with one-click, fully-managed service creation and configuration
- New AWS regions across North America and Europe
- One-click database forking, to easily spin up copies of your database for development, testing, and non-prod access for data science teams
- Automated out-of-memory query protection, to eliminate service instability from runaway complex queries
- Advanced billing features, including historical invoicing and configurable billing emails for finance departments
Beyond these new capabilities for this month, the Timescale Cloud team is also hard at work on additional capabilities to ship later this quarter, including:
- Programmatic APIs for flexible control over cloud services, and easy integration of Timescale Cloud services into CI/CD pipelines and “infrastructure as code” settings
- Multi-node elasticity, to support scaling multi-node services up and down with automated data rebalancing
- High availability via service replicas, with automated failure detection and zero downtime failover
- Read-only endpoints for service replicas, in order to scale read queries while ensuring performance isolation for high-ingest workloads
- Multi-user projects, for easier collaboration across teams
And just wait to see what we have planned for 2022, continuing in our vision of the transparent box for the database cloud. Even easier, more scalable, more familiar, and more flexible.
To all the Timescale community members running the 3+ million active TimescaleDB databases today, we thank you again for your support and feedback. We realize how important your data and applications are, and take that trust seriously.
To everyone who is not yet a user, and is looking for a database cloud for your relational and time-series workloads, we invite you to try Timescale Cloud for free today.
If you'd like to connect with Timescale community members, get expert tips, advice, and more, tune in to Timescale Community Day later this month for talks (and demos!) about time-series data and TimescaleDB tips.
And, for those who are passionate about data, databases, and delighting developers, and interested in joining a fully-remote, global team: learn about our open positions here. We are hiring broadly across many roles.
To the stars! In a transparent, boxy rocket ship! 🐯 🚀