The CrestApps.OrchardCore.AI module is the foundational part of the CrestApps AI Suite for the Orchard Core CMS. It is designed to act as a bridge between your Orchard Core application and modern Large Language Models (LLMs), allowing you to bring intelligence directly into your content management ecosystem.
Because we cannot browse every single C# file interactively, I will guide you through the architectural design, core concepts, and how the codebase is structured so you can read and understand the source code more effectively.
1. The Core Architecture (What does this code do?)
At its heart, this module provides a model-agnostic infrastructure. Instead of hardcoding everything to just OpenAI, the code abstracts the AI provider (Azure OpenAI, DeepSeek, Ollama, etc.) so that developers can plug in different providers and models interchangeably.
If you are exploring the code in the v2.0.0-preview-0005 branch, here are the main responsibilities this module handles:
- Connection & Deployment Management: The code handles secure storage of API keys, endpoints, and deployment models.
- AI Profiles: It allows administrators to configure different "Profiles" (e.g., a "Blog Summarizer Profile" vs. a "Support Chatbot Profile"), each with distinct system prompts, token limits, and connected models.
- Tool Registration: It provides the core interfaces (such as `IExternalTool` or `IAITool`) that allow Orchard Core features to register themselves as executable functions that an LLM can invoke.
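To make the tool-registration idea concrete, here is a minimal sketch of what such a contract could look like. The interface shape, member names, and the `PublishContentTool` example below are hypothetical simplifications for illustration, not the module's actual signatures; check the abstractions in the source for the real contract.

```csharp
// Hypothetical sketch of an LLM-invokable tool contract. The real interfaces in
// CrestApps.OrchardCore.AI may differ in naming and shape.
public interface IAITool
{
    // Name and description are what the LLM sees when deciding which tool to call.
    string Name { get; }
    string Description { get; }

    // Invoked when the LLM requests this tool with a JSON arguments payload.
    Task<string> InvokeAsync(string jsonArguments, CancellationToken cancellationToken);
}

public sealed class PublishContentTool : IAITool
{
    public string Name => "publish_content_item";
    public string Description => "Publishes a draft content item by its content item ID.";

    public Task<string> InvokeAsync(string jsonArguments, CancellationToken cancellationToken)
    {
        // A real implementation would resolve IContentManager and publish the item.
        return Task.FromResult("{\"status\":\"published\"}");
    }
}
```

The key design point is that the tool describes itself (name, description, arguments) in a way the LLM can read, while the invocation side stays ordinary Orchard Core service code.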
2. Guide to Reading the Sub-Modules
The CrestApps.OrchardCore.AI namespace acts as the trunk of a much larger tree. When reading the repository, you will notice it branches out into several specialized sub-modules. Here is how to navigate them:
CrestApps.OrchardCore.AI (The Core Base)
- What to look for: Look for `Startup.cs` for dependency injection. Check the `Services/` folder for the abstraction layers that communicate with the LLM SDKs. Look at the `Models/` folder for how AI settings and profiles are stored in the database.
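The kind of registrations you should expect to find in that `Startup.cs` can be sketched as follows. `DefaultAIService` and `AIProfileManager` are hypothetical implementation names; the interfaces echo the ones you will encounter in the abstractions, but verify the actual lifetimes and types against the source.

```csharp
// Hypothetical sketch of a module Startup class in the Orchard Core style.
public sealed class Startup : OrchardCore.Modules.StartupBase
{
    public override void ConfigureServices(IServiceCollection services)
    {
        // Provider-agnostic abstraction over the configured LLM (OpenAI, Azure, Ollama, ...).
        services.AddScoped<IAIService, DefaultAIService>();

        // Manages named AI Profiles (system prompt, model, token limits).
        services.AddScoped<IAIProfileManager, AIProfileManager>();
    }
}
```

Reading these registrations first tells you which implementation class backs each abstraction, which is usually the fastest way to find the "real" code behind an interface.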
CrestApps.OrchardCore.AI.Mcp (Model Context Protocol)
- What it is: This is one of the most powerful features. It implements an MCP server. MCP is an open standard that allows LLMs and AI agents (like Claude Desktop or custom agents) to securely discover and execute tools inside your system.
- What to look for: Look for the routing and authentication logic. The server uses Server-Sent Events (SSE) and OpenID/API Keys to validate requests, translate the MCP protocol into Orchard Core API calls, and return results.
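Conceptually, the MCP standard is JSON-RPC: agents send messages such as `tools/list` (discover what is available) and `tools/call` (execute one). A rough sketch of the translation step, assuming a hypothetical `IToolRegistry` shape, might look like this; the module's actual routing and authentication code is considerably more involved.

```csharp
// Hypothetical sketch: translating incoming MCP methods into Orchard Core tool calls.
public async Task<object> HandleMcpRequestAsync(
    string method, JsonElement @params, IToolRegistry registry)
{
    switch (method)
    {
        case "tools/list":
            // Advertise every registered Orchard Core tool to the connected agent.
            return registry.GetAll().Select(t => new { name = t.Name, description = t.Description });

        case "tools/call":
            // Resolve the requested tool and forward the JSON arguments to it.
            var toolName = @params.GetProperty("name").GetString();
            var arguments = @params.GetProperty("arguments").GetRawText();
            var tool = registry.Get(toolName);
            return await tool.InvokeAsync(arguments, CancellationToken.None);

        default:
            throw new InvalidOperationException($"Unsupported MCP method: {method}");
    }
}
```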
CrestApps.OrchardCore.AI.Agent & Skills
- What it is: This code registers Orchard Core's internal capabilities (managing content items, tenants, features, and workflows) as "Skills."
- What to look for: You'll find classes decorated with tool attributes or implementing skill interfaces. This is where the code describes to the LLM how to use Orchard Core (e.g., "Use this tool to publish a blog post").
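As a rough illustration of the decorated-class pattern (the attribute and class names here are placeholders, not the module's actual ones), a skill that publishes content might look like this. `IContentManager` and `VersionOptions` are real Orchard Core APIs; everything else is assumed for the sketch.

```csharp
// Hypothetical sketch of a "skill" class. Look for similarly decorated classes
// in the Agent sub-module; the actual attribute names may differ.
public sealed class ContentSkills
{
    private readonly IContentManager _contentManager;

    public ContentSkills(IContentManager contentManager) => _contentManager = contentManager;

    [Description("Publishes the draft content item with the given ID.")]
    public async Task<string> PublishAsync(
        [Description("The content item ID to publish.")] string contentItemId)
    {
        // The descriptions above are surfaced to the LLM so it knows when and
        // how to call this capability.
        var item = await _contentManager.GetAsync(contentItemId, VersionOptions.DraftRequired);
        await _contentManager.PublishAsync(item);
        return $"Published {contentItemId}";
    }
}
```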
CrestApps.OrchardCore.AI.DataSources & Memory
- What it is: This handles RAG (Retrieval-Augmented Generation). It allows the system to chunk documents (like PDFs), generate vector embeddings, and store them in databases like Azure AI Search or Elasticsearch.
- What to look for: Look for the embedding generation pipelines and the search integration services.
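The indexing side of that pipeline reduces to three steps: chunk, embed, store. Here is a deliberately simplified sketch; `IEmbeddingService` and `IVectorIndex` are invented placeholder interfaces, and the chunking strategy in the real module is likely more sophisticated than a fixed character window.

```csharp
// Hypothetical sketch of the RAG indexing flow: chunk -> embed -> store.
public async Task IndexDocumentAsync(
    string documentText, IEmbeddingService embeddings, IVectorIndex index)
{
    // 1. Split the source document into overlapping windows so retrieval
    //    doesn't lose context at chunk boundaries.
    foreach (var chunk in Chunk(documentText, maxChars: 1000, overlap: 100))
    {
        // 2. Generate a vector embedding for the chunk.
        float[] vector = await embeddings.GenerateAsync(chunk);

        // 3. Persist chunk + vector in the configured store
        //    (e.g., Azure AI Search or Elasticsearch).
        await index.UpsertAsync(chunk, vector);
    }
}

private static IEnumerable<string> Chunk(string text, int maxChars, int overlap)
{
    for (int start = 0; start < text.Length; start += maxChars - overlap)
    {
        yield return text.Substring(start, Math.Min(maxChars, text.Length - start));
    }
}
```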
3. How to Approach the Codebase (Step-by-Step)
If you are opening this repository in Visual Studio or VS Code, follow this reading path to understand the flow of data:
1. Start at `Startup.cs`: Every Orchard Core module starts here. Look at `ConfigureServices`. This will tell you exactly which services are registered as Singletons, Scoped, or Transient. It will also show you the feature flags.
2. Examine the Abstractions (`Interfaces/` or `Abstractions/`): Before looking at the implementation, look at the interfaces. You will likely see interfaces like `IAIService`, `IAIProfileManager`, or `IToolRegistry`. This shows you the "contract" the module promises to fulfill.
3. Trace a Request: Imagine a user sends a chat message.
   - Find the controller (e.g., `ChatController` or an API endpoint).
   - See how the controller fetches the configured AI Profile.
   - Trace how the message is handed off to the `IAIService`.
   - See how Orchard Core plugins (tools/skills) are attached to the LLM request.
4. Look at the UI Integration (`Views/` and `Drivers/`): Because Orchard Core is a CMS, there are UI components for the admin panel. Look at the `Drivers/` folder (display drivers in Orchard Core handle how settings and content are rendered in the admin dashboard) to see how an admin configures an AI provider.
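Step 3, the request trace, can be sketched end to end. Every member name here (`FindByIdAsync`, `CompleteAsync`, and the injected field names) is an assumption chosen for readability; the point is the shape of the flow, not the exact API.

```csharp
// Hypothetical, simplified controller flow for a chat request.
public async Task<IActionResult> Chat(string profileId, string message)
{
    // 1. Load the configured AI Profile (system prompt, model, token limits).
    var profile = await _profileManager.FindByIdAsync(profileId);

    // 2. Hand the message to the provider-agnostic service. Registered
    //    tools/skills are attached so the LLM can invoke Orchard Core
    //    capabilities while answering.
    var reply = await _aiService.CompleteAsync(profile, message, _toolRegistry.GetAll());

    // 3. Return the model's reply to the caller.
    return Ok(reply);
}
```

Following this path in a debugger, from controller to profile to service to tool invocation, is the quickest way to internalize how the pieces connect.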
4. Key Concepts to Understand
To fully grasp this code, you should be familiar with a few Orchard Core and AI concepts:
- Orchard Core Modularity: Everything is a feature. The AI suite doesn't force you to use everything. You can enable just the base module, or add Chat, MCP, and RAG independently.
- Function Calling / Tool Use: The codebase heavily relies on giving LLMs "Tools." When you look at the code, pay attention to how Orchard Core schemas are mapped to JSON schemas so the LLM knows what arguments to pass when triggering an Orchard Core workflow.
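As a concrete (illustrative, not verbatim) example of that mapping, a tool that publishes a content item might advertise a JSON schema like the one below to the LLM, telling it exactly which arguments to supply.

```csharp
// Hypothetical: the JSON schema an LLM receives for a "publish_content_item" tool.
// The schema tells the model the argument names, types, and which are required.
const string PublishToolSchema = """
{
  "name": "publish_content_item",
  "description": "Publishes a draft content item.",
  "parameters": {
    "type": "object",
    "properties": {
      "contentItemId": {
        "type": "string",
        "description": "The ID of the content item to publish."
      }
    },
    "required": ["contentItemId"]
  }
}
""";
```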
Next Steps for You:
I recommend cloning the repository locally, running `dotnet build`, and launching the Orchard Core admin panel. Then enable the "Model Context Protocol (MCP) Server" and "AI Agent" features in the admin dashboard and step through the code with a debugger while you send a prompt; that is the fastest way to see this architecture in action.