In Internet infrastructure, data transport using TCP/IP or RTP is implemented at layers 2 and 1 with egress packet conversion and VCSELs. Ingress packet processors (“header shredders”) manage flow control for QoS (and provide a convenient place to foil net neutrality).
There are no “buses” between Internet routers – just optical fiber carrying dense wavelength-division multiplexed (DWDM) traffic at bandwidths exceeding 1 Tbps. The same trend will come to dominate chip-to-chip communications.
The IC industry has embraced the OSI model for select chip-to-chip data classes. For example, PCI Express, originally created to move data between the CPU and the GPU, has evolved with Gen3 into a standard mass-storage interface. For mobile applications, JEDEC UFS provides an interface to NAND flash storage. And the new MIPI LLI standard provides a low-latency, low-power, chip-to-chip interface.
This is a death knell for high-pin-count packages. Wide parallel buses used to achieve bandwidth between ICs in a system will give way to specialized high-speed SerDes I/O.
Specialized IP that interfaces to internal standard SoC buses, such as AXI, AHB, and OCP, will packetize, serialize/deserialize, and de-packetize data at increasingly higher transport speeds while reducing IC power and area. Ultimately this function will be virtually absorbed into the I/O pad. The reduction in pin count will simplify package and PCB routing, improve signal integrity, and yield higher system performance at lower cost.
What is MIPI LLI?
The primary objective of LLI is to provide a low-latency interface between two ICs for internal or external devices (e.g., DRAM). The bandwidth is scalable from 2.9 Gb/s over one differential signal pair, called a lane, to 17 Gb/s over six lanes – in each direction. Differential serial data is driven and received by Type 1 M-PHYs, defined by the PHY Working Group of the MIPI Alliance. Each data lane has an M-PHY at both ends. The analog PHYs are managed by an LLI controller on their respective ICs, and those controllers interface to the rest of the chip.
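As a rough sanity check on those figures (my arithmetic, not a number from the specification), aggregate bandwidth scales approximately linearly with lane count:

```python
# Illustrative arithmetic only -- assumes ideal linear scaling across
# lanes, which real links only approximate.
PER_LANE_GBPS = 2.9  # one differential pair, per the figures above

for lanes in range(1, 7):
    print(f"{lanes} lane(s): ~{PER_LANE_GBPS * lanes:.1f} Gb/s each direction")

# Six lanes gives roughly 17.4 Gb/s, matching the ~17 Gb/s quoted above.
```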
LLI ensures quality of service for data transfers between the two chips. Critical transfers between the devices of the two chips are allocated higher priority as members of the Low Latency traffic class. A further QoS requirement is to minimize data loss. To this end, the LLI controller is a layered structure that conforms to the familiar OSI model.
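The traffic-class prioritization can be pictured as a strict-priority arbiter, sketched below. This is a toy model of my own, not the arbitration rules the LLI specification actually defines; the class names follow the article.

```python
from collections import deque

class TrafficArbiter:
    """Toy strict-priority arbiter: Low Latency (LL) frames are always
    serviced before Best Effort (BE) frames. Illustrative only."""

    def __init__(self):
        self.ll = deque()   # Low Latency traffic class queue
        self.be = deque()   # Best Effort traffic class queue

    def enqueue(self, frame: bytes, low_latency: bool) -> None:
        (self.ll if low_latency else self.be).append(frame)

    def next_frame(self):
        if self.ll:                 # LL class drains first
            return self.ll.popleft()
        if self.be:                 # BE only when no LL traffic waits
            return self.be.popleft()
        return None                 # link idle

arb = TrafficArbiter()
arb.enqueue(b"bulk-write", low_latency=False)
arb.enqueue(b"dram-read", low_latency=True)
assert arb.next_frame() == b"dram-read"   # critical transfer wins
```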
When transmitting, the LLI controller, whether it resides on the system master or the slave, accepts transactions from the local interconnect in the Interconnect Adaptation Layer, frames them as command or data frames in the Transaction Layer, and, in the Data Link Layer, adds a channel ID and implements a credit-based flow-control protocol. The frames are then passed to the PHY Adapter Layer, which adds a frame sequence number and a CRC checksum, optionally encodes the complete frame into a set of 10-bit symbols, and distributes those symbols across the available or active M-PHYs. The M-PHYs transmit to their counterparts on the other chip as pairs of differential signals.
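The transmit path can be sketched in Python. To be clear, this is not the LLI wire format: the field sizes, the CRC (zlib.crc32 stands in for the spec's checksum), and the byte-level lane striping (real LLI distributes 10-bit symbols) are all placeholders chosen for illustration.

```python
import struct
import zlib

def build_frame(payload: bytes, channel_id: int, seq: int) -> bytes:
    """Illustrative layering only, mirroring the stack described above:
    Transaction Layer  -> frame the payload
    Data Link Layer    -> prepend a channel ID (credits omitted here)
    PHY Adapter Layer  -> add a sequence number and CRC
    Field sizes and the CRC are placeholders, not the real LLI format."""
    header = struct.pack(">BH", channel_id, seq)   # 1-byte channel, 2-byte seq
    body = header + payload
    crc = struct.pack(">I", zlib.crc32(body))      # stand-in for LLI's CRC
    return body + crc

def distribute(frame: bytes, lanes: int) -> list:
    """Stripe the frame's bytes round-robin across the active lanes
    (real LLI stripes 10-bit symbols, not raw bytes)."""
    return [frame[i::lanes] for i in range(lanes)]

frame = build_frame(b"read 0x1000", channel_id=3, seq=7)
per_lane = distribute(frame, lanes=4)   # one byte stream per M-PHY
```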
On the receive side, the reverse happens. The M-PHY and LLI combination converts the 10-bit symbols back into bytes, uses the channel IDs to reassemble frames, and passes them up the LLI stack for proper handling: error detection and correction, contention detection and recovery, and generation of back pressure to implement flow control. All error handling is managed between the two sets of LLI controllers and M-PHYs.
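The credit-based flow control and back pressure mentioned above can be shown with a toy model (again my sketch; the actual LLI credit protocol lives in the Data Link Layer and differs in detail): the receiver grants one credit per free buffer slot, the sender stalls when its credits run out, and draining a frame returns a credit.

```python
from collections import deque

class CreditLink:
    """Toy credit-based flow control. The sender may transmit only while
    it holds credits; consuming a frame at the receiver returns a credit,
    releasing back pressure. Mechanism illustration only, not LLI's."""

    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots      # sender-side credit count
        self.rx_buffer = deque()         # receiver-side frame buffer

    def send(self, frame: bytes) -> bool:
        if self.credits == 0:            # back pressure: sender must stall
            return False
        self.credits -= 1
        self.rx_buffer.append(frame)
        return True

    def drain(self) -> bytes:
        frame = self.rx_buffer.popleft() # receiver consumes a frame
        self.credits += 1                # credit returned to the sender
        return frame

link = CreditLink(buffer_slots=2)
assert link.send(b"f0") and link.send(b"f1")
assert not link.send(b"f2")    # no credits left: sender stalls
link.drain()                   # receiver frees a slot
assert link.send(b"f2")        # credit returned, transfer resumes
```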
A device on an interconnect may initiate a transaction, respond to a request, or both. It may be serviced by the Low Latency traffic class, the Best Effort traffic class, or both, depending on the nature of the transaction. For these reasons, the LLI controller, itself another device on the interconnect, must be able to handle any and all of these scenarios.
Written by Sam Beal