We developed a pricing model that puts the data engineer first and rewards data best practices: you pay only for the workloads you run, no matter how much your data volume grows.
In our experience, data teams outgrow traditional EL tools’ entire approach to data movement, including their pricing models. These models have historically been based on monthly data volume in rows or gigabytes, which works well enough when an organization is just starting out, but eventually penalizes growth: cost rises faster than the value delivered. Volume-based pricing can even prevent medium-to-high-volume sources like databases and event streams from being added at all.
With lots of input from our community, we have landed on a pricing model that prioritizes fairness, predictability, and transparency, and that sits closer to the underlying compute resources. On Meltano Cloud, you can run a pipeline built from hundreds of connectors, Python data tools, and custom scripts, and pay only for the resources that pipeline actually uses.
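To make that concrete, here is a minimal, illustrative sketch of what such a pipeline might look like in a Meltano project file. The plugin names, variants, and schedule below are example choices for illustration, not a statement of how Meltano Cloud meters or prices them:

```yaml
# meltano.yml — an illustrative project sketch; plugin names, variants, and the
# schedule are example choices, not a billing reference.
version: 1
default_environment: prod
plugins:
  extractors:
    - name: tap-github         # an EL connector (Singer tap)
      variant: meltanolabs
  loaders:
    - name: target-snowflake   # the destination warehouse loader
      variant: meltanolabs
  utilities:
    - name: dbt-snowflake      # a Python data tool run as part of the same pipeline
      variant: dbt-labs
jobs:
  - name: github-to-warehouse
    tasks:
      # One pipeline: extract, load, then transform.
      - tap-github target-snowflake dbt-snowflake:run
schedules:
  - name: daily-github
    interval: "@daily"
    job: github-to-warehouse
```

The unit that matters for pricing here is the workload that runs, not the number of rows it happens to move.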