uefi: support PCIe in UEFI #3268
Conversation
Pull request overview
Note: Copilot was unable to run its full agentic suite in this review.
Adds initial PCIe NVMe UEFI boot support (WIP) by wiring PCIe NVMe devices into the Petri/OpenVMM configuration path and passing PCIe BAR aperture metadata to the UEFI config blob, plus an end-to-end test.
Changes:
- Add a multi-arch UEFI test that boots Alpine from an NVMe device attached via an emulated PCIe root port.
- Extend UEFI config blob definitions and loader to provide per-host-bridge PCIe BAR apertures.
- Extend Petri VM config/build path to attach NVMe controllers as PCIe devices for
BootDeviceType::PcieNvme.
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| vmm_tests/vmm_tests/tests/tests/multiarch/pcie.rs | Adds a UEFI PCIe-NVMe boot test and basic guest visibility validation. |
| vm/loader/src/uefi/config.rs | Introduces a config-blob structure for PCIe BAR apertures and a new blob type. |
| petri/src/vm/openvmm/construct.rs | Converts Petri “PCIe NVMe drives” into OpenVMM PcieDeviceConfig entries. |
| petri/src/vm/mod.rs | Adds pcie_nvme_drives to PetriVmConfig and a new BootDeviceType::PcieNvme. |
| openvmm/openvmm_core/src/worker/vm_loaders/uefi.rs | Emits PCIe BAR aperture entries into the UEFI config blob when host bridges exist. |
| .gitignore | Removes an extra blank/whitespace-only ignore line. |
Add support for hardware-emulated PCIe buses. This allows booting from NVMe devices exposed on a PCIe bus. The host is required to specify which bridges it wants UEFI to enumerate by creating an aperture config structure for each bridge, to avoid requiring UEFI to parse the SSDT. ECAM ranges are parsed from the MCFG table. See the corresponding openvmm change: microsoft/openvmm#3268
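To make the aperture handoff concrete, below is a hypothetical Rust sketch of what a per-host-bridge aperture entry in the UEFI config blob might look like. The struct name, field names, and widths are illustrative assumptions, not the actual definitions in vm/loader/src/uefi/config.rs:

```rust
/// Hypothetical per-host-bridge BAR aperture entry for the UEFI config blob.
/// Field names and widths are illustrative, not the real config.rs layout.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct PcieBarAperture {
    pub segment: u16,
    pub start_bus: u8,
    pub end_bus: u8,
    pub mmio32_base: u32,
    pub mmio32_size: u32,
    pub mmio64_base: u64,
    pub mmio64_size: u64,
}

/// Exclusive end of an aperture window, `None` on overflow.
pub fn aperture_end(base: u64, size: u64) -> Option<u64> {
    base.checked_add(size)
}

fn main() {
    // One entry per host bridge the host wants UEFI to enumerate; ECAM
    // ranges are not carried here because UEFI parses them from the MCFG.
    let entry = PcieBarAperture {
        segment: 0,
        start_bus: 0,
        end_bus: 255,
        mmio32_base: 0xE000_0000,
        mmio32_size: 0x0400_0000,
        mmio64_base: 0x1_0000_0000,
        mmio64_size: 0x1_0000_0000,
    };
    assert_eq!(
        aperture_end(entry.mmio32_base as u64, entry.mmio32_size as u64),
        Some(0xE400_0000)
    );
    println!(
        "segment {} buses {}..={}",
        entry.segment, entry.start_bus, entry.end_bus
    );
}
```

One entry per bridge keeps the firmware side simple: PciHostBridgeLib can walk the blob instead of interpreting AML in the SSDT.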
Add support for booting from NVMe devices attached to PCIe root ports (ePCI) in the petri test framework:
- Add a pcie_nvme_drives field to PetriVmConfig for PCIe NVMe drive configs
- Add a PcieNvme variant to the BootDeviceType enum
- Translate PCIe NVMe drives to openvmm PcieDeviceConfig in construct.rs
- Add a pcie_nvme_boot VMM integration test that validates the full PciHostBridgeDxe -> PciBusDxe -> NvmExpressDxe -> OS boot chain

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Root cause: UEFI synthvid allocated VRAM at the ECAM base address (0xE4000000) because PlatformPei declared the ECAM range as MMIO but did not reserve it in GCD. The fix reserves ECAM ranges in PciHostBridgeLib so AllocateMemorySpace cannot hand them out.

Changes:
- pcie_nvme_boot test: disable VPCI boot to reduce noise
- Investigation doc: updated with the full root-cause analysis
- MSVM-epci-debug-x64.fd: pre-built firmware with all PCIe fixes

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
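The ECAM collision above is a range-overlap bug: once ECAM is reserved, any MMIO allocation intersecting it must be refused. This illustrative Rust sketch (not EDK2 code; the window sizes are assumptions) shows the invariant the reservation effectively enforces:

```rust
// Illustrative only: overlap test for (base, size) ranges, the invariant an
// MMIO allocator must enforce once the ECAM window is reserved.
fn overlaps(a: (u64, u64), b: (u64, u64)) -> bool {
    a.0 < b.0 + b.1 && b.0 < a.0 + a.1
}

fn main() {
    // ECAM base from the bug report (0xE4000000); the 256 MiB size is a
    // hypothetical single-segment window covering buses 0-255.
    let ecam = (0xE400_0000u64, 0x1000_0000u64);
    // The failing VRAM allocation landed exactly at the ECAM base; with the
    // reservation in place this request must be rejected.
    let vram = (0xE400_0000u64, 0x0080_0000u64);
    assert!(overlaps(ecam, vram));
    // A request entirely below the reserved window is still allowed.
    assert!(!overlaps(ecam, (0xE000_0000, 0x0040_0000)));
    println!("ok");
}
```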
Updated MSVM-epci-debug-x64.fd with a clean build (only the ECAM reservation fix, no diagnostic tracing or double ConnectAll). Updated the investigation doc: the double ConnectAll is confirmed unnecessary, and the NvmExpress DevicePath leak is a pre-existing bug but not the blocker. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Renamed epci-support-plan.md to pcie-support-plan.md and replaced all ePCI references with PCIe throughout planning and investigation docs. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
```rust
BootDeviceType::NvmeViaNvme => todo!(),
BootDeviceType::PcieNvme => {
    self.config.pcie_nvme_drives.push(PcieNvmeDrive {
        port_name: "s0rc0rp0".into(),
```
(Maybe it's OK for now, but I think we need to think a little harder about the "default" PCIe topology, at least within petri.)
```rust
BootDeviceType::PcieNvme => {
    self.config.pcie_nvme_drives.push(PcieNvmeDrive {
        port_name: "s0rc0rp0".into(),
        nsid: 1,
        drive: boot_drive,
    });
    self
}
```
BootDeviceType::PcieNvme hard-codes the target port name ("s0rc0rp0") and assumes a PCIe root topology has been created elsewhere. If the caller forgets to add with_pcie_root_topology(...) (or otherwise create that port), OpenVMM will later fail when wiring the PCIe device to a non-existent port, and the resulting error will be far from the boot-device selection site. Consider either (a) making the port name configurable as part of this boot device type, or (b) adding an explicit early validation / clearer error when PcieNvme is selected but no matching port will exist.
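Option (b) could look roughly like this hypothetical sketch; the PcieNvmeDrive fields, the port list, and with_pcie_root_topology are placeholders for whatever petri actually exposes, not the real API:

```rust
// Hypothetical early validation: fail at boot-device selection time rather
// than deep inside OpenVMM device wiring. Types and names are illustrative.
struct PcieNvmeDrive {
    port_name: String,
    nsid: u32,
}

fn validate_pcie_nvme_port(
    drive: &PcieNvmeDrive,
    configured_ports: &[&str],
) -> Result<(), String> {
    if configured_ports.contains(&drive.port_name.as_str()) {
        Ok(())
    } else {
        Err(format!(
            "PcieNvme boot device (nsid {}) targets port '{}', but no such PCIe root \
             port is configured; call with_pcie_root_topology(...) first",
            drive.nsid, drive.port_name
        ))
    }
}

fn main() {
    let drive = PcieNvmeDrive {
        port_name: "s0rc0rp0".into(),
        nsid: 1,
    };
    // No topology configured: the error surfaces immediately and names the port.
    assert!(validate_pcie_nvme_port(&drive, &[]).is_err());
    // A matching root port exists: validation passes.
    assert!(validate_pcie_nvme_port(&drive, &["s0rc0rp0"]).is_ok());
    println!("validated");
}
```

The error message points the test author directly at the missing topology call, instead of a wiring failure far from the boot-device selection site.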
Add PCIe UEFI support, and a petri test.
UEFI changes here: microsoft/mu_msvm#65