
Want to move beyond chatbots and build AI agents that actually perform tasks? In this demo, discover how to build Agentic AI applications using the Model Context Protocol (MCP), Red Hat AI, and open standards.
Connecting LLMs to enterprise tools often requires complex custom code for every API. Cedric Clyburn, an open source expert on the Red Hat AI team, demonstrates a more efficient approach: using MCP to standardize how AI connects to your data, allowing agents to diagnose Kubernetes issues and report them directly to Slack without proprietary lock-in.
In this video, we cover:
– Model Selection: Using optimized open models like Llama 3.3 from Red Hat AI.
– Standardization: How Model Context Protocol (MCP) replaces custom API integrations.
– Live Demo: Using the Goose MCP client to act as a bridge between the LLM and enterprise tools.
– Workflow: An autonomous agent diagnosing a cluster and notifying the DevOps team.
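The standardization in the bullets above comes from MCP defining one wire format for every tool, so the client never needs per-API glue code. A minimal sketch of an MCP tools/call request (JSON-RPC 2.0, per the MCP specification; the tool name and arguments here are illustrative, not taken from the demo):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_unhealthy_pods",
    "arguments": { "namespace": "production" }
  }
}
```

Whether the server wraps Kubernetes, Slack, or an internal API, the client sends this same shape, which is what lets a client like Goose swap tools in and out without custom integration code.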
Timestamps:
00:00 Intro to Agentic AI capabilities
00:47 Using open models from Red Hat AI
01:43 Connect models to APIs and services with MCP
02:18 Setting up the Goose MCP client
03:03 Agent Demo: Diagnosing Kubernetes
03:40 Agent Demo: Sending findings to Slack
04:20 Additional resources
Explore more Red Hat AI topics:
🧠 Learn more about Agentic AI → https://www.redhat.com/en/products/ai/agentic-ai
⚙️ Get started with AI inference → https://www.redhat.com/en/engage/get-started-with-ai-inference-ebook
📰 Learn more about Red Hat AI partners → https://www.redhat.com/en/blog/how-red-hat-partners-are-powering-next-wave-enterprise-ai
#AgenticAI #RedHatAI #MCP #Kubernetes #OpenSource #DevOps