I've given this some thought, and I think it's time I wrap this thread up, at least for myself.
This post was about prior work on a topic. I got a few references to follow up on. Thank you to those who contributed. I think there's a fun quote about an hour of research being worth a month trying to figure it out yourself.
I also now have some authoritative sources for how you can't just butt a Raspberry Pi's PCIe port up to another PCIe bus and expect magic to happen. For the record, yes, I know.
The reasons go beyond the RPi's root complex, and even beyond PCIe entirely. Peripheral buses are meant to connect hosts to their peripherals, and that's a directional relationship, even when the bus itself is fully peer-to-peer. Two hosts sharing the same peripheral bus, whatever the protocol (PCIe, USB, ISA, I2C, etc.), would never know the other was there, because host software doesn't respond to bus probes; that's not a use case for hosts. Hosts don't raise interrupt requests either. Even when the bus design and controller hardware fully support it, two hosts and no peripherals would do absolutely nothing.
Don't know about the other buses, but I2C supports more than one master. I've no idea if they can communicate directly, though.
But there's nothing at all stopping a device that is a peripheral to one host from also being a peripheral to another host on another bus. If you think about it, that's what a GPIO peripheral is: a very flexible peripheral that can implement a secondary bus to communicate with other devices.
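To make that concrete, here's a rough sketch of a GPIO peripheral acting as a trivial secondary bus: one host shifts a byte out over two pins (clock plus data), and any other host watching those pins can receive it. The pin numbers, timing, and framing are all made up for illustration, not taken from any real design.

```python
# Minimal sketch: bit-bang a byte out over two GPIO pins (clock + data).
# Pin numbers, bit timing, and framing are arbitrary examples.
import time
import RPi.GPIO as GPIO

CLK = 17   # example BCM pin for the clock line
DAT = 27   # example BCM pin for the data line

GPIO.setmode(GPIO.BCM)
GPIO.setup(CLK, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(DAT, GPIO.OUT, initial=GPIO.LOW)

def send_byte(value, bit_time=0.001):
    """Shift one byte out MSB first; data is valid on the rising clock edge."""
    for i in range(7, -1, -1):
        GPIO.output(DAT, (value >> i) & 1)   # set data while the clock is low
        time.sleep(bit_time / 2)
        GPIO.output(CLK, GPIO.HIGH)          # the receiver samples on this edge
        time.sleep(bit_time / 2)
        GPIO.output(CLK, GPIO.LOW)

try:
    for b in b"hello":
        send_byte(b)
finally:
    GPIO.cleanup()
```

The other host only needs two inputs and an edge-triggered read to receive it, which is the whole point: neither side has to be a bus master of anything.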
There were a few reasons why I didn't immediately and directly address the mistaken assumption that I was just going to butt two PCIe ports together. For one, it's not my duty to guess at your assumptions and then correct them. You showed up at my party and pissed in the punchbowl. I don't owe you anything.
You held your party in a public park and your punchbowl was shaped like a urinal. What did you expect?
And, arguably, it is your duty to correct our assumptions. By not doing so you work against yourself. We can only see what you've written, and we know nothing about you.
Another reason is that right out of the gate, I asked for an explanation, which is a golden chance to state and examine one's assumptions. But I only got broad, unsupported conclusions, claims of authority, and a lot of snark. I've fed the trolls enough as it is. Shame on those of you who call yourselves professional engineers and gave abuse instead of checking your own assumptions.
I admit that I've let the rhetoric on this thread and the other affect me too personally. While the conduct in question has ranged from well-meant but off-point, to completely unprofessional and deserving of reprimand, none of it should affect me so personally. This thread wasn't supposed to be about them or what they're saying, and it's my mistake to let it become so.
I think I may get in touch with the PiKVM people. They've done some very impressive work on virtual hardware and control planes for ordinary PCs. They might be interested in the same things as me. I think we even talked about it a few years ago, but I can't remember. Might have been someone else.
Last I looked they didn't use PCIe. They used an HDMI-to-CSI bridge for video and USB device mode (or was it a microcontroller?) for the keyboard. That might have changed, though. Oh, and GPIO for direct control of the power and reset buttons (nothing difficult in that; I did the GPIO part years before they did).
I might see about iterative prototyping for the Pi-on-card idea. I'd probably start simply with a Pi on bus power and physically mounted to an internal card.
That's not going to teach you much other than how to get 5V out of the PCIe connector: boost up from 3.3V (max 9W) or buck down from 12V (max 6W without software configuration). source
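Rough numbers on what those rails actually buy you at 5V, assuming a converter efficiency of about 90% (my assumption for illustration, not a figure from the spec):

```python
# Back-of-the-envelope 5V budget from a PCIe x1 slot's rails, before any
# slot power reconfiguration. The efficiency figure is an assumption.
EFFICIENCY = 0.90

rails = {
    "3.3V boost": 3.3 * 3.0,   # ~9.9 W from the 3.3V rail (3 A limit)
    "12V buck":   12.0 * 0.5,  # 6 W from the 12V rail before software config
}

for name, watts_in in rails.items():
    watts_out = watts_in * EFFICIENCY
    amps_at_5v = watts_out / 5.0
    print(f"{name}: ~{watts_out:.1f} W -> ~{amps_at_5v:.2f} A at 5V")
```

Either way it's a fairly tight budget for a Pi under heavy load, so the conversion choice isn't just a detail.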
Then iterate to the hard UART to RPi UART feature.
Which will teach you how to interface a UART to a PCIe bus, assuming you don't use a reference design. You might as well just connect via a USB-to-TTL UART and focus on the software side.
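The software side of that is a few lines with pyserial; the device path and baud rate below are just examples:

```python
# Minimal sketch: talk to the Pi's UART through a USB-to-TTL adapter,
# using pyserial. Device path and baud rate are examples.
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    port.write(b"hello pi\r\n")    # send a line to the Pi
    reply = port.readline()        # whatever comes back, or b"" on timeout
    print(reply.decode(errors="replace"))
```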
Then I suppose I'd capture some PCIe traffic and see what I can learn. I've got a logic analyzer, but it can't get anywhere close to the 2.5 GT/s of PCIe 1.0. Shift registers might help, but I could also get a much faster DAQ card and avoid some unnecessary variables.
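For a sense of scale, here's the back-of-the-envelope version, assuming you'd want at least a few samples per bit to get anything useful:

```python
# Rough numbers for why a typical logic analyzer can't see PCIe 1.0 directly.
line_rate = 2.5e9          # 2.5 GT/s per lane for PCIe 1.0
unit_interval = 1 / line_rate
samples_per_bit = 4        # assumption: a few samples per bit at minimum
needed_rate = line_rate * samples_per_bit

print(f"Unit interval: {unit_interval * 1e12:.0f} ps per bit")
print(f"Sample rate needed: ~{needed_rate / 1e9:.0f} GS/s")
# Most hobbyist logic analyzers top out in the hundreds of MS/s, which is
# orders of magnitude short of what a single PCIe 1.0 lane needs.
```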
Wouldn't it be better to read the PCIe documentation first? That should tell you everything before you go anywhere near the hardware.
I guess next would be to find some I/O hardware that can operate at line rate, try some handshakes, and maybe bit-bang a protocol or two. UARTs are probably good candidates for emulation: they do something interesting, but should have minimal bandwidth and latency requirements.
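As a warm-up for the bit-banging, the UART framing itself is simple enough to do on a GPIO pin, at least at comically low baud rates. This is just a sketch to show the 8N1 frame; the pin and baud rate are arbitrary, and sleep()-based timing is nowhere near precise enough for real use:

```python
# Sketch of bit-banging one 8N1 UART frame on a GPIO pin. Only the framing
# is the point; Python's timing jitter limits this to very low baud rates.
import time
import RPi.GPIO as GPIO

TX = 22            # example BCM pin
BAUD = 300         # deliberately slow
BIT_TIME = 1.0 / BAUD

GPIO.setmode(GPIO.BCM)
GPIO.setup(TX, GPIO.OUT, initial=GPIO.HIGH)   # a UART line idles high

def send_frame(byte):
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit
    for bit in bits:
        GPIO.output(TX, bit)
        time.sleep(BIT_TIME)

try:
    send_frame(0x55)   # alternating pattern, easy to spot on a scope
finally:
    GPIO.cleanup()
```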
Of course, the point of iterative prototyping is that you learn a lot along the way, so I doubt I'll get to bit-banging protocols without making a few hard left turns. Or for the Brits in the house, a few hard right turns, whichever is the awkward direction where you live.

*cough* Thunderbolt *cough*
Research how external Thunderbolt PCIe enclosures (e.g. external GPUs) work and what's involved in accepting the packetised PCIe data they receive and decoding it. Instead of passing it to a physical PCIe endpoint, pass it to a virtual one.
You don't need to patronize me.
I wasn't. At least not intentionally. My point was that you're trying to reinvent the wheel so should learn from the original.
But to your point: you're on to something. In fact, EC2 Mac instances work in exactly that way. In a datacenter somewhere near Herndon, Virginia, there are literally racks and racks of Mac minis, each with a Thunderbolt cable into its own EC2 Nitro card. The Nitro cards take the PCIe data packetized by the Thunderbolt subsystem, say thank you very much, and execute the requests against AWS services. The transactions to the virtual Ethernet card get tunneled out to VPC, which is the software-defined network for customer-domain data. Transactions with virtual NVMe devices get executed against Elastic Block Storage, or perhaps a more-local cache to reduce network load. I heard that different use cases are implemented differently in that regard.
I'd like to do a similar concept to the Nitro card, but with much more accessible hardware. Why? Just because it interests me. I'm curious. I find delight in exploring interesting things, doing something myself just because I want to really understand how it's done and see if I can do it too. I don't need any more reason than that.
I'd love to make a module that you can just drop into a common PC, or a thousand of them, and have bare-metal access to the systems, but entirely through software. No special BIOS/UEFI support needed. No need for a proprietary, closed-source BMC. The PC is none the wiser. It just thinks it has ordinary hardware.
For a significant number of device classes I can do that now, over USB: mass storage, CDC Ethernet, RNDIS Ethernet, UART, HID, and camera, to name a few.
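To give a sense of what that looks like in practice, here's roughly how one of those (a CDC ACM serial gadget) gets set up through configfs on a Pi in USB device mode. The gadget name, vendor/product IDs, and strings are placeholders, and it assumes the dwc2 overlay and libcomposite are already loaded and that it runs as root:

```python
# Rough sketch: create a USB serial (ACM) gadget via configfs on a Pi in
# device mode. Assumes dwc2 + libcomposite are loaded and this runs as root.
# Gadget name, IDs, and strings are placeholders.
import os
from pathlib import Path

G = Path("/sys/kernel/config/usb_gadget/mygadget")

def write(path, value):
    path.parent.mkdir(parents=True, exist_ok=True)   # configfs creates the attrs
    path.write_text(value)

write(G / "idVendor", "0x1d6b")    # example vendor ID
write(G / "idProduct", "0x0104")   # example product ID
write(G / "strings/0x409/manufacturer", "example")
write(G / "strings/0x409/product", "toy serial gadget")
write(G / "configs/c.1/strings/0x409/configuration", "ACM only")
write(G / "configs/c.1/MaxPower", "250")

# Create the ACM function and link it into the configuration.
func = G / "functions/acm.usb0"
func.mkdir(parents=True, exist_ok=True)
link = G / "configs/c.1/acm.usb0"
if not link.exists():
    os.symlink(func, link)

# Bind to the first available UDC; the host should then enumerate a CDC ACM port.
udc = os.listdir("/sys/class/udc")[0]
write(G / "UDC", udc)
```

On the Pi side the gadget shows up as /dev/ttyGS0; on the host it's just another serial port, which is exactly the "the PC is none the wiser" property being talked about.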
I guess I just like the fantasy of hooking a bazillion PCs into pods, like people in the Matrix, and having them be my playthings. I dunno. Just feels so cool to me.
That's not a good analogy. Humans in the Matrix were batteries hooked up to a VR system and life support, not components of the Matrix itself. Their only purpose was to generate power.
As I said above, in many ways you're reinventing the wheel, and unless you need the performance of PCIe, doing so in an overly complex (and therefore overly expensive) way.