[Home] [Blog] [Contact] - [Talks] [Workshop] [Bio] [Customers]
twitter linkedin youtube github rss

genAI for DevOps engineers Workshop

Why this workshop ?

A lot of courses teach genAI from the perspective of a Data Scientist. This specific course is tailored to the traditional developer, devops and devsecops engineer who wants to learn more on genAI. You don’t need to understand the deep math behind the scenes to make use of it.

We show you how to build genAI applications , explain the engineer practices you need to know and how it’s similar and different from the traditional SDLC: ex. How do you deploy and version these applications ? How do you do testing ? What does observability look like ?

About the teachers

Patrick Debois

Most known for his work in DevOps and DevSecOps , he has been active in the space of genAI for a few years now. With his extensive background he knows the traditional SDLC in depth and can translate the concepts of the genAI domain to familiar concepts of software engineering easily.

John Willis

As an accomplished author and innovative entrepreneur, John is deeply passionate about exploring and advancing the synergy between Generative AI technologies and the transformative principles of Dr. Edwards Deming.

Why attend ?

The space of genAI is moving very fast and there is a lot of noise surrounding it. While people want to learn more about it , they don’t know always know were to begin. And if they try examples and material, it might already be outdated. This can make getting started daunting.

As an engineer this course will:

As an engineering lead I’d want my engineers to take this course:

What to expect

A two day course with a mixture of theory and hands-on exercises. The following schedule will give an overview of the topics discussed. Note that it might change a bit as the field is rapidly evolving and we want to adapt to the latest industry insights.

See the more detailed schedule below.

Current available sessions

We currently have planned this workshop at the following locations:

How to register and pay

First register for the workshop using the google form below. You can provide additional information to help us tailor the workshop to your needs.

If you need an official invoice after payment please also provide these details.

Next we'll provide you with a link by email to complete the payment process.

Detailed schedule

Note: this content is indicative and will be refined as we go.

Day 1

  • LLMs and prompting (1,5h)
  • We get started with connecting to an LLM and sending it prompts.
    • llms (local , cloud) & basic prompting
    • advanced prompting: Few show prompting, CoT, ToT
    • LLM Frameworks: llamindex, langchain, …

  • Testing: (1,5h)
  • Just as in traditional software development, we need to test our application.
    • building evals
    • explore llm as a judge, vs manual testing
    • prompt refactoring and rewriting

  • RAG & Graph databases (2h)
  • We then connect our llms with our own data sources and show we can improve the results and reduce hallucinations.
    • understanding Search / Vector databases
    • explore different embedding models
    • show the importance of Chunking , Loaders strategies
    • How to improve results with Re-ranking
    • use GraphRAG as an additional knowledge source
  • genAI Infra: (0.5h)
  • Once we finished our app, we are ready to deploy it and explore the required infra to run it.
    • look at hosting options
    • explain the different nference speed factors
    • how to do model, prompt, app versioning
    • use of proxies such as llm proxy , llmlite
    • intelligent llm routing
    • introduce caching / embeddings
    • look at different costs models
  • Security & Operations: (1.5h)
  • In production we need to monitor our application and protect it from attacks.
    • metrics that matter
    • how does observability fit in
    • provide guardrails to generation
    • anticipate and protect against prompt injection

Day 2

  • AI Functions: (1h)
  • We use the output of llms as input for our code and have the llm call our code.
    • JSON format, pydantic, retries, json models
    • function calling
    • reasoning methods: react and other patterns
  • Vision Language Models: (0.5h)
  • We go beyond the input as text by using images and video as input.
    • explain multi modal models
    • usage with ui automation and generation
    • extend it to desktop automation
  • Agents: (2h)
  • While LLMS are focused on output, agents are focused on action. Agentic workflows are the future to improve llms and our workflows.
    • agent identities and hierarchies
    • frameworks overview: Langgraph, CrewAI, autogen
    • different memory strategies
    • agentic usage patterns
  • Code Assistants: (1.5h)
  • Code assistants are a mix of llms and agents. They are focused on code generation and execution.
    • how is code generation different
    • provide code execution sandboxes
    • agentic IDEs : ex. aider , cursor, gptscript
    • coding context patterns
  • Autonomous Agents: (1h)
  • How far or how close are we from full autonomous agentic software developers ?
    • explanation of SWE benchmark
    • overview SWE-agents - Devin, OpenDevin, …
    • relation with the ironies of automation

Epilogue

genAI as a technology is here to stay.

Remember, it’s not the AI that’s going to take your job - it’s the person who knows how to use AI to do your job better, faster, and cheaper.