
ScaleOps Secures $21.5 Million to Revolutionize Cloud-Native Resource Management

Yodar Shafrir 19 December 2023 3 min read

Today, we are excited to announce a significant achievement for ScaleOps – we have successfully raised $21.5 million in total funding. We’re thrilled to be working with LightSpeed Venture Capital, who led this round, alongside participating investors NFX and Glilot.

This is an amazing milestone for ScaleOps, and I couldn’t be more proud of what we’ve achieved in such a short time. So, how did we get to where we are today, and where are we headed?

We’re on to something big: Run-time Automation of Cloud-Native Resource Management 

As cloud-native environments become increasingly dynamic, with demand in constant flux, managing cloud resources has become extremely complex, challenging, and tedious.

The configurations for container sizing, scaling thresholds, and node type selection are static, while consumption and demand are dynamic. This means engineers must manually adjust cloud resources to meet fluctuating demand and avoid under- or over-provisioning, resulting in millions of dollars wasted on idle resources, or degraded application performance during peak demand.
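To make the problem concrete, here is a minimal sketch of the kind of static configuration described above. The workload name, image, and numbers are all hypothetical; the point is that every value is a hand-tuned guess fixed at deploy time, while actual consumption keeps moving.

```yaml
# Hypothetical deployment: requests and limits are fixed when applied,
# no matter how actual consumption fluctuates afterward.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service        # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout-service
  template:
    metadata:
      labels:
        app: checkout-service
    spec:
      containers:
        - name: app
          image: example.com/checkout:1.0
          resources:
            requests:
              cpu: "500m"       # static guess at typical load
              memory: "512Mi"
            limits:
              cpu: "1"          # static ceiling; CPU throttles at peak
              memory: "1Gi"     # OOM-kill threshold if demand spikes
---
# Hypothetical autoscaler: the 70% target is yet another static
# threshold that must be tuned by hand, per workload.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

If traffic doubles overnight, none of these numbers adjust themselves; someone has to notice, re-estimate, and redeploy.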

For production environments, every container requires a different scaling strategy. Experienced engineers spend hours trying to predict demand, running load tests, and tweaking configuration files for each container. Managing this at scale is nearly impossible. 

Even when engineers do take action, all the existing solutions share the same problem: resource allocation is static, while resource consumption is dynamic and constantly changing.

We realized the only way to free engineers from ongoing, repetitive configuration work and let them focus on what truly matters is to completely automate resource management down to its smallest building block: the single container, at runtime.

We built a context-aware platform that can automatically optimize these constantly changing environments, adapting to changes in demand in real time. ScaleOps leverages cutting-edge AI algorithms to analyze, predict, and automatically allocate resources according to demand, ensuring optimized application performance without any manual intervention.

Our vision is clear – ScaleOps empowers engineers with a platform that automatically streamlines resource management, enhances the overall efficiency of cloud-native applications, and ensures cost-effectiveness. 

It’s all about the team

So how did we get to where we are so fast? 

What sets ScaleOps apart is not just the product we built, but our strong team and great culture. We couldn’t have gotten to where we are without them.

We move fast. We make fast decisions and believe in simplicity. Above all, we are obsessed with our customers and delivering fast to their biggest pain points and challenges. 

Our team has grown to over 30 people, based in Tel Aviv and distributed across the US. Our next step is scaling up further: investing in R&D to improve and expand our product offering, and growing our go-to-market team to deepen our market presence.

A huge thank you to all our investors, partners and customers – this achievement is as much yours as it is ours. 

We can’t wait to see what’s next.

Related Articles

Kubernetes Workload Rightsizing: Cut Costs & Boost Performance

In the rapidly changing digital environment, Kubernetes has become the go-to platform for managing and scaling applications. However, achieving the ideal balance between performance and cost efficiency remains a challenge. Misconfigured workloads, whether over or under-provisioned, can result in wasted resources, inflated costs, or compromised application performance. Rightsizing Kubernetes workloads is critical to ensuring optimal resource utilization while maintaining seamless application functionality. This guide will cover the core concepts, effective strategies, and essential tools to help you fine-tune your Kubernetes clusters for peak efficiency.

Kubernetes In-Place Pod Vertical Scaling

Kubernetes continues to evolve, offering features that enhance efficiency and adaptability for developers and operators. Among these is in-place resizing of the CPU and memory resources assigned to containers, introduced as an alpha feature in Kubernetes 1.27. It allows the CPU and memory of running pods to be adjusted without restarting them, helping to minimize downtime and optimize resource usage. This blog post explores how the feature works, its practical applications, limitations, and cloud provider support. Understanding this functionality is vital for effectively managing containerized workloads and maintaining system reliability.
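As a quick illustration of the feature the article above covers, the hypothetical pod spec below opts in to in-place resizing via a `resizePolicy`. This assumes a cluster with the `InPlacePodVerticalScaling` feature gate enabled (it is alpha in 1.27); the pod name and values are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                      # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired     # CPU can change with no restart
        - resourceName: memory
          restartPolicy: RestartContainer  # memory change restarts the container
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

Once the pod is running, patching its `resources` (rather than deleting and recreating it) triggers the in-place resize, subject to the per-resource restart policy declared above.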

Top 8 Kubernetes Management Tools in 2025

Kubernetes has become the de facto platform for building highly scalable, distributed, and fault-tolerant microservice-based applications. However, its massive ecosystem can overwhelm engineers and lead to bad cluster management practices, resulting in resource waste and unnecessary costs.
