Manual data management is no longer an option. Modern systems demand speed, structure, and automation. In this interview, Site Reliability Engineer Vladyslav Haina discusses the role of DataOps and DevOps in today’s infrastructure, his experience leading automation at Deutsche Bank, the key challenges in the field — and why, despite all of that, he believes some areas should still stay manual.
Vladyslav, from your experience, how are DevOps and DataOps practices transforming internal processes within companies today? Why are these approaches becoming an essential part of modern infrastructure rather than just a nice-to-have?
DevOps and DataOps have fundamentally changed the way organizations handle infrastructure and data management. In team settings, these practices break down silos across development, operations, QA, and data teams — enabling faster iteration cycles, more reliable deployments, and consistent quality.
In the past, deploying a new version of an application or managing a data pipeline was a manual, high-risk process. Today, a combination of infrastructure as code (IaC), CI/CD pipelines, and observability helps organizations increase deployment frequency, reduce change failure rates, and respond to incidents faster.
They are no longer “nice-to-haves” — today’s systems are simply too complex to manage manually. The adoption of these practices is key to the speed, scalability, and stability of modern digital transformation.
Could you share a concrete example of how implementing DataOps or DevOps practices at Deutsche Bank led to measurable business impact or increased system resilience? For example: Chaos Engineering, CI/CD automation, or data pipelines.
At Deutsche Bank, where I led the automation of toil, the shift proved to be a game changer. Picture this: we reduced deployment times from days to minutes by implementing CI/CD pipelines for both microservices and data pipelines. Among the most impactful projects was integrating Chaos Engineering principles into our cloud deployments. We actively tested the resilience of our systems under failure scenarios, ultimately improving our incident response SLAs by more than 40%.
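To make that concrete: a chaos experiment of this kind boils down to injecting a failure and verifying that the system heals itself. The sketch below, using the Kubernetes Python client, kills a random pod in a namespace and waits for recovery; the namespace, label selector, and replica count are placeholders, not Deutsche Bank's actual setup.

```python
# Illustrative chaos experiment: delete one random pod behind a service and
# verify the deployment recovers. Namespace and labels are hypothetical.
import random
import time

from kubernetes import client, config


def kill_random_pod(namespace: str, label_selector: str) -> str:
    """Delete one randomly chosen pod matching the selector and return its name."""
    config.load_kube_config()  # assumes local kubeconfig access to a test cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
    if not pods:
        raise RuntimeError("No pods matched the selector; nothing to kill.")
    victim = random.choice(pods)
    v1.delete_namespaced_pod(victim.metadata.name, namespace)
    return victim.metadata.name


def wait_for_recovery(namespace: str, label_selector: str,
                      expected: int, timeout: int = 120) -> bool:
    """Poll until the expected number of pods is Running again, or time out."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
        running = [p for p in pods if p.status.phase == "Running"]
        if len(running) >= expected:
            return True
        time.sleep(5)
    return False


if __name__ == "__main__":
    killed = kill_random_pod("payments-staging", "app=payments-api")
    print(f"Killed pod {killed}, waiting for recovery...")
    recovered = wait_for_recovery("payments-staging", "app=payments-api", expected=3)
    print("Recovered" if recovered else "Did not recover within the timeout")
```

In practice an experiment like this runs with a tightly scoped blast radius, against non-production environments first, and only graduates to production once the recovery behaviour is well understood.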
On the DataOps side, we standardized ETL pipelines using a workflow orchestration system and distributed data processing framework, combined with automated data quality checks. This prevented bad data from entering critical systems, significantly reducing errors in our risk and reporting systems.
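As an illustration of what such an automated quality gate can look like (the column names and checks are invented for the example, not the bank's actual rules), a batch-level validation step might be as simple as this:

```python
# Illustrative data quality gate for an ETL step: reject a batch before it
# reaches downstream risk and reporting systems. Columns and thresholds
# are made up for the example.
import pandas as pd


class DataQualityError(Exception):
    """Raised when a batch fails validation and must not be loaded."""


def validate_batch(df: pd.DataFrame) -> None:
    checks = {
        "non_empty": len(df) > 0,
        "no_null_trade_ids": df["trade_id"].notna().all(),
        "unique_trade_ids": df["trade_id"].is_unique,
        "notional_non_negative": (df["notional"] >= 0).all(),
    }
    failures = [name for name, passed in checks.items() if not passed]
    if failures:
        raise DataQualityError(f"Batch rejected, failed checks: {failures}")


if __name__ == "__main__":
    batch = pd.DataFrame(
        {"trade_id": ["T1", "T2", None], "notional": [1_000_000, 250_000, -5]}
    )
    try:
        validate_batch(batch)
    except DataQualityError as err:
        print(err)  # in a real pipeline this would fail the task and page someone
```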
I know that at SolidMinds you were building infrastructure almost from scratch. What were the biggest challenges in automating processes and integrating data engineering into the infrastructure? And what helped you overcome them?
One of the biggest challenges was aligning data workflows with modern software development lifecycles. In many organizations, data processes still exist in their own universe — managed manually, with loose documentation, and prone to silent failures. In today’s world, this simply isn’t acceptable. Integrating DataOps into the infrastructure means bridging this divide and treating data pipelines as they should be — like production code.
Technically, data quality monitoring, schema versioning, and environment consistency posed the greatest challenges. In production, pipelines that worked in development often failed due to subtle differences in data volume, latency, or schema drift. To address this, we had to build validation, alerting, and rollback capabilities — essentially applying CI/CD principles to data.
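Schema drift is the part that bites most often, so here is a minimal sketch of what detecting it can look like, assuming a hypothetical expected schema; in a real pipeline the failure path would trigger alerting and a rollback to the last known-good snapshot rather than just printing a decision.

```python
# Illustrative schema-drift check: compare an incoming batch against the
# schema the production pipeline expects, then decide whether to promote
# the run or roll back. The expected schema is hypothetical.
import pandas as pd

EXPECTED_SCHEMA = {
    "trade_id": "object",
    "notional": "float64",
    "trade_date": "datetime64[ns]",
}


def detect_schema_drift(df: pd.DataFrame) -> list[str]:
    """Return human-readable drift findings; an empty list means no drift."""
    findings = []
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            findings.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            findings.append(f"{column}: expected {dtype}, got {df[column].dtype}")
    for column in df.columns:
        if column not in EXPECTED_SCHEMA:
            findings.append(f"unexpected column: {column}")
    return findings


def promote_or_rollback(df: pd.DataFrame) -> str:
    drift = detect_schema_drift(df)
    if drift:
        # In a real pipeline: alert, stop the load, restore the last good snapshot.
        return f"ROLLBACK: {drift}"
    return "PROMOTE"


if __name__ == "__main__":
    batch = pd.DataFrame({"trade_id": ["T1"], "notional": [100.0], "region": ["EU"]})
    print(promote_or_rollback(batch))
```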
Another challenge was orchestrating multiple tools and platforms — ETL frameworks, cloud storage, databases, and metadata catalogs — into a cohesive and automated pipeline. Imagine each team using its own tools and standards. Creating a shared ecosystem with clear ownership took time and diplomacy.
The key to our success was adopting a modular approach. First, we containerized the data processing jobs. Then, we standardized pipelines using Airflow and Dataproc. Additionally, we implemented metadata tracking and lineage auditing with tools like OpenLineage.
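As a rough sketch of that shape (assuming Airflow 2.x with the Google provider installed; the project, cluster, and bucket names are placeholders, and this is not the actual SolidMinds code), an Airflow DAG submitting a containerized PySpark job to Dataproc via DataprocSubmitJobOperator looks roughly like this:

```python
# Minimal Airflow DAG sketch: submit a PySpark job to Dataproc on a daily schedule.
# Project, region, cluster, and GCS paths are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PYSPARK_JOB = {
    "reference": {"project_id": "example-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://example-bucket/jobs/transform_trades.py"},
}

with DAG(
    dag_id="trades_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = DataprocSubmitJobOperator(
        task_id="transform_trades",
        project_id="example-project",
        region="europe-west1",
        job=PYSPARK_JOB,
    )
```

Lineage then comes largely for free: enabling the OpenLineage integration for Airflow lets supported operators emit events recording which datasets each run read and wrote, which is what makes auditing practical.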
In my experience, the success of data projects largely depends on mindset. The first rule is to treat data as a product. The second is embedding data engineers into dev teams to enforce shared accountability.
In your view, how does working with DevOps and DataOps in fintech or startup environments differ from working with large enterprise systems like Deutsche Bank? Where do engineers face the toughest challenges?
I believe each environment comes with its own set of challenges and advantages. In fintech or startups, there’s more freedom — the ability to move quickly, adopt cutting-edge tools, and experiment without multiple layers of approval. But with that freedom comes the responsibility to prepare for challenges that can hit hard if you’re not ready, such as the lack of guardrails around monitoring, security, compliance, and scalability.
In large enterprises, things move more slowly, but for good reason. Systems are complex, data is regulated, and uptime is non-negotiable. Engineers here face challenges related to scale and compliance, often working on modernizing legacy systems — which can feel like changing tires on a moving car.
Each environment has its own unique complexities. But one challenge stands out as the toughest in both: aligning stakeholders around change.
And on a personal level — what’s closer to your heart: the “controlled chaos” of a startup or the structured processes of a large enterprise?
I’d say each environment has its own charm. That said, I lean slightly toward the “controlled chaos” of startups. There’s a unique thrill in building something from the ground up, wearing multiple hats, and solving a variety of problems creatively under pressure. You get to innovate, fail fast, and iterate.
That being said, I also appreciate the structure and rigor of large enterprises — especially when you see how a well-designed CI/CD pipeline or SRE practice can make systems resilient at scale. But at heart, I’m drawn to the speed and dynamism of startup life.
You work a lot with automation, machine learning, and infrastructure optimization. But is there anything in your work that, in your opinion, should stay manual? Where do you think automation is unnecessary — or maybe even harmful?
Yes, I believe it’s crucial that some aspects of my job remain manual. For instance, incident analysis and root cause exploration should involve humans. While automation can detect anomalies and generate alerts, understanding the context — especially in multi-system environments — still requires human intuition.
Onboarding new services or designing system architectures also benefits from manual, collaborative planning. Relying too heavily on automation in these areas can lead to poorly designed systems being scaled too early.
In some cases, automation can even be counterproductive — for example, auto-restarting failing pods without addressing the root issue, which can mask symptoms until they escalate into full outages. In other words, it’s all about maintaining balance: automate repeatable, low-risk tasks, but leave critical or nuanced tasks to human judgment.