DevOps Journeys - Guest blog with Ben Stanley
“DevOps practitioners must know Kubernetes thoroughly; expect this to remain a requirement for the foreseeable future.”
This is just one side of a central debate explored in our latest guest blog with Ben Stanley from cloud consultancy Temrel.
From the essential role of Kubernetes in modern DevOps to the debate over its complexity and whether simpler alternatives might better serve smaller teams or streamlined applications, this blog covers the key shifts reshaping the field.
Ben also dives into trends in automation, cloud-native development, and how DevOps practitioners can adapt to stay ahead in a rapidly changing landscape.
DevOps has transformed the way organisations build, deploy, and manage software. In your experience, how has the DevOps landscape evolved in recent years, and what do you see as the most significant shifts driving its growth and adoption today?
The tools available have matured a great deal in the last ten years across the board. A decade ago we were writing our own tortuous CI/CD scripts; now we can access marketplaces with thousands of solid pipelines. There has been an evolution from DevOps towards SRE and Platform Engineering, roles which better capture the dynamic between development and operations. It also means DevOps practitioners need to understand how platforms work from the ground up. Infrastructure as Code (IaC) has become a baseline. No serious project is without it.
Writing and maintaining Terraform code has become a fundamental pillar of the DevOps role. Looking forward, the biggest growth area in DevOps is using AI to validate code as part of a CI/CD pipeline. This gives us the ability to assess code much more deeply prior to go-live.
With the growing adoption of AI and machine learning in DevOps, how do you see AIOps transforming traditional practices, and how are you incorporating these technologies into your processes?
CI/CD pipelines and other automated ops processes can have AI monitoring and evaluation built in, improving the ability to detect faults and maintain code quality prior to deployment. I use a step that runs new code through AI as a sanity check on all my CI/CD pipelines. Code writing has also been transformed. Whilst it's not yet advanced enough to create fully working systems, AI's ability to generate first-pass code and help with debugging vastly improves development speed. I use AI copilot tools extensively in Python scripting and Terraform. Diagnosis and troubleshooting are also made much easier.
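To make that concrete, here is a minimal sketch of what such a sanity-check step might look like, assuming the OpenAI Python SDK and an API key exposed in the pipeline environment; the model name, prompt and pass/fail convention are illustrative assumptions, not a description of Temrel's actual pipelines.

```python
# ai_review.py - illustrative CI step: send the branch diff to an LLM for a sanity check.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment;
# the model name and PASS/FAIL convention are hypothetical choices for this sketch.
import subprocess
import sys

from openai import OpenAI


def main() -> int:
    # Diff this branch against main; a real pipeline would make the base ref configurable.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not diff:
        print("No changes to review.")
        return 0

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. Reply 'PASS' if the diff looks safe, "
                "or 'FAIL: <reason>' if it contains likely bugs or security issues."
            )},
            {"role": "user", "content": diff},
        ],
    )
    verdict = (response.choices[0].message.content or "").strip()
    print(verdict)
    # A non-zero exit code fails the pipeline step if the model flags the change.
    return 0 if verdict.startswith("PASS") else 1


if __name__ == "__main__":
    sys.exit(main())
```

Run as an advisory gate like this, a FAIL simply blocks the merge until a human reads the reasoning, keeping the model in a sanity-check role rather than making it the final arbiter.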
We've already seen most of the big vendors of monitoring solutions deploy beta AI assistants to help diagnose the root cause of errors. It's much faster than trawling through Stack Overflow…
How are you leveraging platform engineering to enhance developer experience and streamline operations? What challenges and benefits have you encountered in implementing this approach?
Seeing developers as customers of your DevOps platform boosts the working relationship, enhances productivity and improves code throughput. We use GitOps as a fundamental practice, with strong operational procedures in place to enable responsiveness to dev needs. As such, devs have virtually no infrastructure-related time overheads, allowing them to concentrate on features. It also helps us maintain strict and thorough control of infrastructure cost. Naturally, this does constrain flexibility, and can inhibit experimentation if devs need to engage Platform Engineers each time they want to try something new.
With the increasing emphasis on developer productivity and autonomous teams, how do you balance self-service models with governance and security requirements?
It's always a trade-off. There are certain red lines, notably around networking, that must be maintained come what may. Beyond controlling traffic to, from and around your infra, much of the rest is a question of cost. We use AWS and maintain a strong system of organisations and mandatory rules. This can sometimes constrain experimentation, but an organisation that sacrifices security in favour of velocity shall have neither.
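As an illustration of what one such mandatory rule can look like in practice, here is a minimal sketch of a region guardrail expressed as an AWS Service Control Policy and created with boto3. It assumes the accounts sit under AWS Organizations; the policy name and region list are hypothetical examples, not Temrel's actual configuration.

```python
# guardrail.py - illustrative sketch of a "mandatory rule" as an AWS Service
# Control Policy (SCP). Assumes AWS Organizations is in use; the policy name
# and approved-region list are hypothetical examples.
import json

import boto3

# Deny every action outside the approved region, regardless of IAM permissions.
# A production policy would typically exempt global services such as IAM.
REGION_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["eu-west-2"]}},
    }],
}


def create_guardrail() -> str:
    """Create the SCP; it still needs attaching to a root, OU or account."""
    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="restrict-regions",
        Description="Deny all actions outside approved regions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(REGION_GUARDRAIL),
    )
    return policy["Policy"]["PolicySummary"]["Id"]
```

Because an SCP applies at the organisation level, it caps what any IAM policy in a member account can grant, which is what makes these red lines enforceable rather than advisory.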
In the era of GitOps and Infrastructure as Code (IaC), how do you ensure best practices for version control, automation, and consistency across environments?
Generally, it's straightforward to deploy multiple identical environments, managed by environment-specific variables where necessary, from a single infrastructure-as-code repository. This approach, coupled with pipelines that deploy to all environments simultaneously (with appropriate manual controls), largely ensures that environments are homogeneous and prevents nasty surprises at production go-live. The other element of GitOps is preventing manual CRUD of infrastructure by anyone, DevOps resources included. This is crucial to ensure all infrastructure is managed and deployed by code only.
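For concreteness, here is a minimal sketch of that single-repository pattern: one Terraform codebase with a workspace and a variable file per environment, driven from a small wrapper script of the kind a pipeline would call. The environment names and paths are illustrative assumptions.

```python
# deploy.py - illustrative wrapper for the single-repo, multi-environment pattern.
# Assumes one Terraform codebase with a per-environment tfvars file under env/;
# environment names, paths and the workspace convention are hypothetical.
import subprocess
import sys

ENVIRONMENTS = ("dev", "staging", "prod")  # all deployed from the same code


def deploy(env: str) -> None:
    # Each environment keeps its own state in a Terraform workspace
    # (the -or-create flag requires Terraform 1.4+)...
    subprocess.run(
        ["terraform", "workspace", "select", "-or-create", env], check=True
    )
    # ...while sharing the same modules, differing only in the variables passed in.
    subprocess.run(
        ["terraform", "apply", f"-var-file=env/{env}.tfvars", "-auto-approve"],
        check=True,
    )


if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "dev"
    if env not in ENVIRONMENTS:
        sys.exit(f"Unknown environment: {env}")
    deploy(env)
```

In a pipeline, the manual controls Ben mentions would sit between invocations, for example an approval gate before the production deploy runs.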
What are some of the most common reasons DevOps initiatives fail, and what lessons have you learned from these failures that could help others avoid similar pitfalls?
DevOps requires buy-in and long-term commitment from management. Accepting the reduced flexibility inherent in the DevOps approach yields long-term benefits, but not always short-term ones. Transitioning from an 'individual heroes' approach to something systematised and abstracted away from manual control is a large operational and cultural shift that takes effort, especially when things inevitably break.
Indeed, it's often hard to know where to set the dial on control. Going from zero to a fully hands-off, automated system in one go will generally fail as users struggle to adapt and are obliged to find workarounds. This inevitably leads to a worst-of-both-worlds situation of lower engagement and productivity. A slower, more managed and responsive transition is required.
Looking ahead, how do you envision the future of DevOps evolving over the next few years? What emerging technologies, practices, or cultural shifts do you believe will have the biggest impact on the way we approach DevOps?
I expect that the extensive use of AI to do work previously handled by junior DevOps staff will lead to a contraction in available roles and then, in the medium term, a contraction in available talent. Working with LLMs and other flavours of machine learning will become a must-have skillset for serious DevOps practitioners as more and more companies bring AI in-house for use on their proprietary data. Edge compute management will become a more important skill as virtually everything is expected to be connected to the internet. I'm really curious how DevOps will work as hardware comes back into fashion via IoT and robotic devices such as drones. Kubernetes will remain the standard compute management paradigm for some time to come, though serverless may make a dent in this. DevOps practitioners must know Kubernetes thoroughly; expect this to remain a requirement for the foreseeable future.
If you could automate one aspect of your job or daily routine - no matter how mundane or outrageous - what would it be and why?
Sales and marketing! Finding clients is still the hardest and most labour-intensive part of the job.
To hear Ben's insights alongside those of seven other industry leaders, download DevOps Journeys 4.0 today. DevOps Journeys provides a roadmap to navigate evolving challenges and stay ahead of the curve.
Whether you’re advancing your DevOps skills or initiating digital transformation, this resource is invaluable for every DevOps enthusiast.