Logs, traces, and metrics all represent something happening at a specific time.
At my company, we treat them as exactly that: events, and I'll explain why.
My Observability Experience
As the head of a logging product at one of the large observability companies, I witnessed firsthand the challenges our customers faced with the sheer volume of data they sent us.
They needed more control, but every time I brought ideas to leadership that gave customers more control over their data (and, in turn, more control over their spend), I was shut down.
Observability is now the second most expensive part of running a software business, growing at a staggering 30% year-over-year.
(In 2024, Datadog gave a customer a bill for $65 million, so much that they had to address it on an earnings call.)
"Observability is now the second most expensive part of running a software business, growing at a staggering 30% year-over-year."
The problem is that the value companies get from observability isn't growing at anywhere near the same rate as its cost.
What teams learn from their telemetry is nowhere close to what vendors are charging for it.
And to make matters worse, vendors control the collection and ingestion of all the data they're charging you for. They hold your knobs and levers.
Here’s how we solved that at my company.
Why Everything Is An Event
Log data, trace data, metrics—we treat everything as an event.
As CEO, I believe all data is a representation of something that happened at a specific moment in time. An event, as it were.
At Datable, we take data and normalize events into the OpenTelemetry standard, creating a single, unified playing field for all of our data, regardless of origin.
Since data comes from various sources, we build our software to support a wide range of formats and protocols. We support Syslog, JSON, and Fluent, but we also support open-source vendor protocols like the New Relic and Datadog wire protocols.
By normalizing your data, you can ensure consistency across the board.
No more dealing with "user_ID" in one place and "userID" in another.
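To make that concrete, here's a minimal sketch of what key normalization can look like. The alias table and helper function are illustrative assumptions, not Datable's actual implementation; the canonical names are OTel-style attribute keys.

```python
# Hypothetical alias table; a real pipeline would load this from config.
KEY_ALIASES = {
    "user_ID": "user.id",
    "userID": "user.id",
    "svc": "service.name",
    "service": "service.name",
}

def normalize_keys(event: dict) -> dict:
    """Return a copy of the event with aliased keys collapsed
    to their canonical, OTel-style attribute names."""
    return {KEY_ALIASES.get(key, key): value for key, value in event.items()}

# Two events from different sources...
a = normalize_keys({"user_ID": 42, "svc": "checkout"})
b = normalize_keys({"userID": 42, "service": "checkout"})
assert a == b  # ...now agree on "user.id" and "service.name"
```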
"No more dealing with "user_ID" in one place and "userID" in another."
This also makes it easy for us to send data out to third-party vendors. If we need to send something out in the Splunk or New Relic format, it's not a problem.
We take it from the OTel format, transform it into their API's shape, and send it on its way.
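Here's a rough sketch of that fan-out step. The payload shapes loosely follow the vendors' public log-ingest formats, but treat them, along with the `ship` helper, as illustrative assumptions rather than exact APIs.

```python
import json
import urllib.request

def to_splunk(event: dict) -> dict:
    # Splunk HEC-style payloads wrap the record in an "event" field.
    return {"event": event, "sourcetype": "_json"}

def to_newrelic(event: dict) -> dict:
    # New Relic's Log API-style payloads: a "message" plus attributes.
    return {"message": event.get("body", ""), "attributes": event}

def ship(url: str, payload: dict, auth_header: str) -> None:
    """POST one JSON payload to a destination. Real pipelines batch,
    compress, and retry; this only shows the transform-then-send step."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
    )
    urllib.request.urlopen(req)

# One normalized OTel-style event, two vendor-specific payloads:
otel_event = {"body": "payment failed", "severity": "ERROR", "user.id": 42}
splunk_payload = to_splunk(otel_event)
newrelic_payload = to_newrelic(otel_event)
```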
How I Addressed It at My Company
Everyone should have the opportunity to create a solid data pipeline.
So we created a no-code experience that allows anyone—PMs, SREs, BI professionals—to transform, enrich, and route data with just a few clicks.
When SREs build pipelines with Datable, they can trust that their data is secure and will always end up where it needs to go.
When Product Managers create a pipeline, role-based access ensures that no one will modify their pipeline without permission.
Each person can build the data pipeline they need to succeed. That was the most straightforward, first-principles way of handling it.
Quick Tips to Save on Monitoring
Not everything has to go to Datadog or New Relic.
You can “turn down” the amount of data you’re sending to your vendor.
Send everything to S3 instead, and forward only a reduced volume to the vendor.
Instant savings, with a knob you control.
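A simplified sketch of that routing decision follows; the `route` helper and the severity rule are hypothetical, just to show the shape of the idea.

```python
def route(event: dict, archive, vendor) -> None:
    archive(event)                      # everything lands in cheap S3 storage
    if event.get("severity") in ("ERROR", "FATAL"):
        vendor(event)                   # only high-signal events cost vendor money

# Example sinks; a real pipeline would batch and compress before writing.
s3_buffer, vendor_buffer = [], []
route({"severity": "INFO", "body": "heartbeat"}, s3_buffer.append, vendor_buffer.append)
route({"severity": "ERROR", "body": "payment failed"}, s3_buffer.append, vendor_buffer.append)
assert len(s3_buffer) == 2 and len(vendor_buffer) == 1
```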
My second recommendation is something I saw a customer do, using Datable filtering and sampling.
They used our robust state management to implement tailored sampling based on trace data.
They can now adjust the level of visibility based on whether a user is a free or paying customer.
Naturally, they send in more data for paying customers to gain deeper insights into their user experience.
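Here's a toy version of that kind of tier-based sampling. The rates and the `customer.tier` attribute are made up for illustration, not Datable's built-in behavior.

```python
import random

SAMPLE_RATES = {"paid": 1.0, "free": 0.05}  # keep all paid traffic, 5% of free

def keep_trace(trace: dict) -> bool:
    """Decide whether a trace survives sampling, based on customer tier."""
    tier = trace.get("attributes", {}).get("customer.tier", "free")
    return random.random() < SAMPLE_RATES.get(tier, 0.05)

paid = {"attributes": {"customer.tier": "paid"}}
assert keep_trace(paid)  # a rate of 1.0 means paying customers always get through
```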
In the Future, Everything Is an Event
By treating logs, traces, and metrics as events and normalizing them into a single standard, Datable gives companies the power to control their data and their costs.
We help teams create data pipelines that are actually useful to them, and any team member can create one.
Peel off the data you want, and route it anywhere.
You can even use Datable to have a live bake-off, comparing two services simultaneously without interruption.
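Under the hood, a bake-off is just a tee: the same normalized stream gets duplicated to two destinations so they can be compared on identical data. A toy sketch, with illustrative names:

```python
def tee(event: dict, sinks) -> None:
    for sink in sinks:
        sink(dict(event))  # shallow copy so one sink can't mutate another's view

candidate_a, candidate_b = [], []
tee({"body": "login", "service.name": "auth"}, [candidate_a.append, candidate_b.append])
assert candidate_a == candidate_b  # both candidates saw exactly the same event
```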
New Relic and Datadog users save an average of 35% on observability when using Datable.
Imagine the impact cutting a third of your observability budget would have.