
DevOps Journeys - Guest Blog with Patrick Hyland

“AI is becoming a driving force in DevOps by automating labour-intensive tasks, predicting issues, and validating code, enabling teams to focus on strategic initiatives.”

This insight from Patrick Hyland at Domino's sets the stage for our latest guest blog, where we explore how AI is revolutionising the DevOps landscape.

From the rise of microservices and AIOps to integrating security into CI/CD pipelines, Patrick explains how AI-driven automation, toil reduction, and structured governance are reshaping the way teams build, ship, and run software.

DevOps has transformed the way organisations build, deploy, and manage software. In your experience, how has the DevOps landscape evolved in recent years, and what do you see as the most significant shifts driving its growth and adoption today?

At the companies I worked with in the late noughties and early 2010s, developers wrote code and ran and tuned their own build systems. The CI/CD pipeline was an emergent technology, not yet mainstream. These companies had people called ‘DevOps Engineers’ who would typically handle deployments of, for example, Java package files using early configuration management and deployment technologies such as CFEngine, Puppet, and Chef. A lot changed with the advent of microservices and containerisation, and the job title ‘DevOps Engineer’ became enigmatic and debated through much of the 2010s, although I always thought of it as someone who set up the CI/CD pipeline toolset and worked with teams to make things run end to end.

The practice landscape has evolved further in recent years, with DevSecOps, GitOps, API gateways and service meshes, and AIOps all being prominent themes. Especially prominent is the AI augmentation of routine development activities for toil reduction. If one steps away from the DevOps label, this evolution is pure ongoing technological innovation, and the drivers for adoption are ultimately the desire to retain a competitive operational edge in bringing products and services to market.

With the growing adoption of AI and machine learning in DevOps, how do you see AIOps transforming traditional practices, and how are you incorporating these technologies into your processes?

If you look at DevOps as a value stream that moves software features into production, there are a number of emergent AI technologies that amplify activities traditionally undertaken as hard graft by developers operating their CI/CD pipelines. Some examples are AI to handle PRs: auto-documenting them, analysing them, suggesting refactoring actions, suggesting test coverage, scanning for security vulnerabilities in the code, and so on. These are very effective toil-reduction technologies and well worth incorporating, as they will soon become standard, and organisations that do not adopt them will become inefficient and fall behind.
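To make the toil-reduction point concrete, here is a minimal sketch of automated PR review, assuming the OpenAI Python client; the model name, prompt wording, and the `pr.diff` input are illustrative placeholders rather than anything from Patrick's setup:

```python
# Minimal sketch: using an LLM to auto-summarise and review a PR diff.
# Assumes the OpenAI Python client; model choice, prompt wording, and
# how the diff is obtained are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_pr(diff_text: str) -> str:
    """Ask the model for a summary, refactoring hints, and test-coverage gaps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[
            {"role": "system",
             "content": ("You are a code reviewer. Summarise the change, "
                         "suggest refactorings, flag missing test coverage, "
                         "and note obvious security issues.")},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("pr.diff") as f:  # e.g. the output of `git diff main...feature`
        print(review_pr(f.read()))
```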

What is perhaps more innovative and interesting, with the potential for competitive advantage, is the concept of building an expert system, perhaps with something like GraphRAG (graph-based Retrieval-Augmented Generation), that might present itself as a holistically trained capability specialist. Think of an engineer asking an SRE-capable AI how to retrospectively engineer reliability into the integration circuits of a problematic subsystem within a large, distributed microservice estate. The AI knows exactly what's there for that implementation, how it's configured, and how it's performing, and it proceeds to suggest sensible service mesh configurations, monitoring approaches, and deployment methods tailored to that subsystem's context. Using run data, the AI evaluates the efficacy of the changes some weeks later, gradually assisting the engineer in converging on a highly reliable operational state.
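A minimal sketch of the retrieval half of such a capability specialist, assuming the estate's topology is held in a graph (networkx here); the service names, metadata fields, and two-hop radius are all invented for illustration:

```python
# Sketch of the retrieval step in a GraphRAG-style SRE assistant:
# pull the neighbourhood of a problematic subsystem from a service
# graph and turn it into grounded context for an LLM prompt. The
# services and metadata are invented for illustration.
import networkx as nx

estate = nx.DiGraph()
estate.add_edge("checkout", "payments", protocol="gRPC", p99_ms=480)
estate.add_edge("checkout", "basket", protocol="HTTP", p99_ms=120)
estate.add_edge("payments", "fraud-check", protocol="HTTP", p99_ms=900)

def subsystem_context(graph: nx.DiGraph, service: str, hops: int = 2) -> str:
    """Collect the n-hop neighbourhood of a service as prompt context."""
    nearby = nx.ego_graph(graph, service, radius=hops, undirected=True)
    lines = []
    for u, v, data in graph.edges(data=True):
        if u in nearby and v in nearby:
            lines.append(f"{u} -> {v} ({data['protocol']}, p99={data['p99_ms']}ms)")
    return "\n".join(lines)

# The retrieved context would be prepended to the engineer's question
# before calling the model, so answers are grounded in the actual
# topology and run data rather than generic advice.
print(subsystem_context(estate, "payments"))
```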

With the increasing emphasis on developer productivity and autonomous teams, how do you balance self-service models with governance and security requirements?

I have recently been involved in managing across engineering on a multi-year programme of work, with many development teams working to modernise a legacy e-commerce platform into a microservice estate using a strangler architectural pattern, so I'll talk in that context. My view is that when you commence with something big like that, you need to start by defining a software engineering capability model that shows the routine activities the backend developers will be performing, and base these activities on a gated ‘route to live’ value stream. Front-load that stream with architectural activities and the developers' involvement in them, for example their Architectural Review Board / Technical Design Authority inputs that culminate in a low-level design.

Once you have the value stream defined, create a low-level design for a sample application, implement it with hooks into all those activity gates (e.g. threat model, reliability analysis, build, test, perf, deploy, monitor), and make it available to all the teams as a reference. Then start to enforce those gates with the tech leads of the teams at a weekly meeting. If this seems control-heavy, it's needed initially to scale a big programme of work and establish an engineering governance structure. Once that's running, and working, you can loosen it up a bit and teams can start experimenting, but I suggest using something like dynamic capabilities innovation theory (sense, seize, transform) to transfer new tech and approaches into the baselined software engineering capability you started with, creating new versions of the sample app that demonstrate the value and efficacy of the new techniques. An innovation management approach is preferable to going completely wild west, as you may otherwise find yourself in a tough spot.
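As a rough illustration of the gate enforcement described above, here is a toy sketch in which a release candidate can only be promoted once every gate has recorded its evidence; the gate names follow the examples in the answer, and the evidence dictionary stands in for whatever tooling actually records sign-off:

```python
# Toy sketch of a gated 'route to live': each gate must have produced
# its evidence artefact before a release candidate can be promoted.
# Gate names follow the examples above; the evidence lookup is a
# stand-in for real sign-off tooling.
GATES = ["threat_model", "reliability_analysis", "build",
         "test", "perf", "deploy_plan", "monitoring"]

def ready_to_promote(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (ok, missing_gates) for a release candidate."""
    missing = [g for g in GATES if not evidence.get(g, False)]
    return (not missing, missing)

ok, missing = ready_to_promote({
    "threat_model": True, "reliability_analysis": True, "build": True,
    "test": True, "perf": False, "deploy_plan": True, "monitoring": True,
})
if not ok:
    print("Blocked at gates:", ", ".join(missing))  # -> Blocked at gates: perf
```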

As the need for continuous compliance grows, how are companies integrating security into their CI/CD pipelines, and what strategies are they using to ensure compliance in dynamic cloud environments?

I'm a proponent of getting things right at the start, but designing secure systems from the beginning is often a challenge on fast-paced programmes of work. We are looking at transferring modern threat-modelling tools into our DevSecOps capability that use a component-diagram approach to generate a contextually relevant threat landscape, and which create a backlog of countermeasures right at the architectural design stage so that developers can go in with eyes wide open on security.
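A drastically simplified sketch of the component-diagram idea: represent the data flows, then emit STRIDE-style countermeasures for flows that cross a trust boundary or carry PII. The components and rules here are invented for illustration, not the output of any real tool:

```python
# Toy sketch: derive a countermeasure backlog from a component diagram.
# Flows crossing a trust boundary get STRIDE-style candidate items.
# Components, boundaries, and rules are simplified for illustration.
from dataclasses import dataclass

@dataclass
class Flow:
    source: str
    dest: str
    crosses_trust_boundary: bool
    carries_pii: bool = False

FLOWS = [
    Flow("browser", "api-gateway", crosses_trust_boundary=True, carries_pii=True),
    Flow("api-gateway", "orders", crosses_trust_boundary=False),
    Flow("orders", "payments-provider", crosses_trust_boundary=True, carries_pii=True),
]

def backlog(flows: list[Flow]) -> list[str]:
    items = []
    for f in flows:
        if f.crosses_trust_boundary:
            items.append(f"{f.source}->{f.dest}: mutual TLS / authn (spoofing)")
            items.append(f"{f.source}->{f.dest}: input validation (tampering)")
        if f.carries_pii:
            items.append(f"{f.source}->{f.dest}: encrypt in transit, minimise "
                         "logging (information disclosure)")
    return items

for item in backlog(FLOWS):
    print("-", item)
```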

We are also looking at shifting left the routine security scanning technologies, e.g. static code analysis and container scanning, so that they run at the CI gate, achieving compliance as the software is being built rather than at some later point in the CI/CD pipeline.
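As a sketch of what such a CI gate might run, assuming bandit for static analysis of Python code and trivy for container scanning (both real tools, though the image name and severity threshold here are illustrative):

```python
# Sketch of a shift-left CI gate: run static analysis and a container
# scan before merge, failing the build on findings. Assumes `bandit`
# (Python SAST) and `trivy` (container scanner) are installed; the
# image name and severity threshold are illustrative.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/", "-ll"],  # static code analysis, medium+ severity
    ["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", "registry.example/app:candidate"],
]

def ci_security_gate() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("security gate FAILED at:", cmd[0])
            return result.returncode
    print("security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(ci_security_gate())
```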

In the era of GitOps and Infrastructure as Code (IaC), how do you ensure best practices for version control, automation, and consistency across environments?

We use the benchmark practice of a Kubernetes GitOps platform repository, defining common base templates and then environments by folder. The templates in each environment folder apply transformations to the base templates to give the environment the correct configurations for its locality and purpose. Promotion of change through the environment pipeline on the route to live is done via edits to these YAML files on a feature branch, which is then merged into the main platform repo after a PR that acts as a sense check on the action. We use an Argo CD controller to compare the environment folder with the Kubernetes run state and to sync the run state so that it matches the declarative YAML configurations in the environment folder. Everything is in Git, so it is versioned; the automation is via Argo CD; and comparison of environment configurations can be done by a differential check of the environment folders.
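That final environment comparison can be as simple as a recursive diff of the YAML in two environment folders; a minimal stdlib-only sketch, assuming a layout like envs/staging/ and envs/prod/ (the folder names are placeholders):

```python
# Minimal sketch of the environment comparison mentioned above: a
# recursive diff of the YAML in two environment folders. Assumes a
# layout like envs/staging/ and envs/prod/; stdlib only.
import difflib
from pathlib import Path

def diff_envs(env_a: str, env_b: str, root: str = "envs") -> None:
    a_root, b_root = Path(root, env_a), Path(root, env_b)
    for a_file in sorted(a_root.rglob("*.yaml")):
        rel = a_file.relative_to(a_root)
        b_file = b_root / rel
        if not b_file.exists():
            print(f"only in {env_a}: {rel}")
            continue
        diff = difflib.unified_diff(
            a_file.read_text().splitlines(),
            b_file.read_text().splitlines(),
            fromfile=f"{env_a}/{rel}", tofile=f"{env_b}/{rel}", lineterm="",
        )
        for line in diff:
            print(line)

diff_envs("staging", "prod")
```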

What are some of the most common reasons DevOps initiatives fail, and what lessons have you learned from these failures that could help others avoid similar pitfalls?

A common reason why a DevOps initiative fails is that the engineering work has not been strategically connected to an external product or service innovation. This can lead to a lack of financial commitment to getting the DevOps foundation firmly in place, because its value is not as readily grasped as that of the external innovation. Sometimes the best time to undertake a DevOps initiative is when you are innovating on a customer-facing service that is perceived to be of high value, where the DevOps practices can be shown to facilitate the delivery and stability of that external-facing innovation. A technology roadmap that shows how the two relate is a useful technique.

Looking ahead, how do you envision the future of DevOps evolving over the next few years? What emerging technologies, practices, or cultural shifts do you believe will have the biggest impact on the way we approach DevOps?

AI will reduce toil in development and operations processes. Cybersecurity is an even greater concern than it was in the past, so DevSecOps innovations will become increasingly important. SRE remains very topical, and with the widespread adoption of cloud-native API gateway and service mesh technologies it has become easier to engineer reliable applications and to configure canary and other effective deployment strategies.

If you could automate one aspect of your job or daily routine - no matter how mundane or outrageous - what would it be and why?

On the personal side, and simply for the time savings, I would love to automate the planning of holidays and weekend adventures based on my interests, with a couple of options regularly surfaced as suggestions for me to choose from. More job-related, I see value in automating the collection of transcripts of the non-private, work-related conversations my engineering teams have each day, with AI distilling these very concisely into the essential matters discussed and sending them to me as a report at the end of each day.
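For what it's worth, that second automation is assemblable from off-the-shelf parts today; a toy sketch, assuming transcripts land as text files and reusing an OpenAI-style client (paths, model, and prompt are placeholders, and delivery of the report is left out):

```python
# Toy sketch of the end-of-day digest: gather the day's transcript
# files and ask a model for a terse summary. Paths, model name, and
# prompt are placeholders; delivery (e.g. email) is omitted.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def daily_digest(transcript_dir: str = "transcripts/today") -> str:
    texts = [p.read_text() for p in sorted(Path(transcript_dir).glob("*.txt"))]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Distil these meeting transcripts into a very "
                         "concise report of the essential matters discussed.")},
            {"role": "user", "content": "\n\n---\n\n".join(texts)},
        ],
    )
    return response.choices[0].message.content

print(daily_digest())
```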

To hear Patrick's insights alongside those of 7 other industry leaders, download DevOps Journeys 4.0 today. DevOps Journeys provides a roadmap to navigate evolving challenges and stay ahead of the curve.

Whether you’re advancing your DevOps skills or initiating digital transformation, this resource is invaluable for every DevOps enthusiast.
