An Area-Efficient On-Chip Memory System for Massive MIMO Using Channel Data Compression

Research output: Journal contribution › Article in a scientific journal

RIS

TY - JOUR

T1 - An Area-Efficient On-Chip Memory System for Massive MIMO Using Channel Data Compression

AU - Liu, Yangxurui

AU - Liu, Liang

AU - Edfors, Ove

AU - Öwall, Viktor

PY - 2018

Y1 - 2018

N2 - Massive multiple-input multiple-output (MIMO) has proven to deliver improvements in both spectral and transmitted energy efficiency. However, these improvements come at the cost of critical design challenges for the hardware implementation due to the huge amount of data that has to be processed in real time, especially the storage of large channel state information (CSI) matrices. This paper presents an on-chip memory system with CSI compression which provides high area efficiency while supporting flexible accesses and high bandwidths. Optimization across system, algorithm, and hardware levels is used to develop hardware-friendly compression algorithms that exploit propagation characteristics and large antenna-array features. More specifically, group-based and spatial-angular compression algorithms are implemented in a heterogeneous memory system, which consists of a unified memory for storing compressed CSI and a parallel memory for flexible access. Up to 75% of the memory can be saved for a 128-antenna system, at less than 0.8 dB performance loss. Implemented in ST 28 nm FD-SOI technology, the capacity of the designed system is 1.06 Mb, which is equivalent to a 4 Mb uncompressed memory and can store 100 128×10 channel matrices. The area is 0.47 mm², demonstrating a 58% reduction compared with a memory system without CSI compression. With a supply voltage of 1.0 V, the memory system can run at 833 MHz, providing an 833 Gb/s access bandwidth.

AB - Massive multiple-input multiple-output (MIMO) has proven to deliver improvements in both spectral and transmitted energy efficiency. However, these improvements come at the cost of critical design challenges for the hardware implementation due to the huge amount of data that has to be processed in real time, especially the storage of large channel state information (CSI) matrices. This paper presents an on-chip memory system with CSI compression which provides high area efficiency while supporting flexible accesses and high bandwidths. Optimization across system, algorithm, and hardware levels is used to develop hardware-friendly compression algorithms that exploit propagation characteristics and large antenna-array features. More specifically, group-based and spatial-angular compression algorithms are implemented in a heterogeneous memory system, which consists of a unified memory for storing compressed CSI and a parallel memory for flexible access. Up to 75% of the memory can be saved for a 128-antenna system, at less than 0.8 dB performance loss. Implemented in ST 28 nm FD-SOI technology, the capacity of the designed system is 1.06 Mb, which is equivalent to a 4 Mb uncompressed memory and can store 100 128×10 channel matrices. The area is 0.47 mm², demonstrating a 58% reduction compared with a memory system without CSI compression. With a supply voltage of 1.0 V, the memory system can run at 833 MHz, providing an 833 Gb/s access bandwidth.

U2 - 10.1109/TCSI.2018.2859361

DO - 10.1109/TCSI.2018.2859361

M3 - Article

JO - IEEE Transactions on Circuits and Systems I: Regular Papers

JF - IEEE Transactions on Circuits and Systems I: Regular Papers

SN - 1549-8328

ER -
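
The spatial-angular compression summarized in the abstract can be illustrated with a short, self-contained Python sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: the DFT across the antenna dimension, magnitude-based coefficient selection, the 25% retention ratio, the 16-bit word width, and the synthetic multipath channel are all illustrative choices, and the paper's group-based algorithm and fixed-point details are not reproduced here.

# Minimal sketch of spatial-angular CSI compression for a massive MIMO
# channel matrix, assuming a DFT across the antenna dimension and
# magnitude-based coefficient selection. Word widths, the 25% retention
# ratio, and the plain FFT are illustrative assumptions only.
import numpy as np

M, K = 128, 10          # base-station antennas x users (as in the abstract)
N_MATRICES = 100        # number of stored channel matrices (as in the abstract)
BITS_PER_SAMPLE = 16    # assumed word width per real/imaginary component

# Uncompressed storage: 100 matrices x 128x10 complex entries x 2x16 bit,
# roughly the 4 Mb figure quoted in the abstract.
uncompressed_bits = N_MATRICES * M * K * 2 * BITS_PER_SAMPLE
print(f"Uncompressed CSI storage: {uncompressed_bits / 1e6:.2f} Mb")

# Synthetic sparse multipath channel: a few planar wavefronts hitting a
# 128-element uniform linear array, so the angular-domain response is sparse.
rng = np.random.default_rng(0)
n_paths = 8
angles = rng.uniform(-np.pi / 2, np.pi / 2, size=(n_paths, K))
gains = (rng.standard_normal((n_paths, K))
         + 1j * rng.standard_normal((n_paths, K))) / np.sqrt(2)
m = np.arange(M)[:, None, None]                                  # antenna index
H = np.sum(gains * np.exp(1j * np.pi * m * np.sin(angles)), axis=1)  # M x K

# Spatial-angular transform: DFT across the antenna (spatial) dimension.
H_ang = np.fft.fft(H, axis=0) / np.sqrt(M)

# Keep only the strongest 25% of angular-domain coefficients per user,
# roughly matching the 75% memory saving reported for 128 antennas.
keep = int(0.25 * M)
H_cmp = np.zeros_like(H_ang)
for k in range(K):
    idx = np.argsort(np.abs(H_ang[:, k]))[-keep:]
    H_cmp[idx, k] = H_ang[idx, k]

# Reconstruct and measure the error from discarding weak coefficients.
H_rec = np.fft.ifft(H_cmp * np.sqrt(M), axis=0)
nmse_db = 10 * np.log10(np.linalg.norm(H - H_rec) ** 2
                        / np.linalg.norm(H) ** 2)
print(f"Kept {keep}/{M} coefficients per user ({100 * keep / M:.0f}% of memory)")
print(f"Reconstruction NMSE: {nmse_db:.1f} dB")

Because only a fraction of the angular-domain coefficients carries most of the channel energy for large arrays, storing the retained coefficients (plus their indices, omitted here for brevity) is what allows a compressed unified memory of the kind the paper describes, while a separate parallel memory serves decompressed data for flexible access.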