RSRP is the linear average over the power contributions of the resource elements that carry the reference/synchronization signals (e.g. the SSS for SS-RSRP, per TS 38.215).
A resource element, however, occupies a different bandwidth depending on the numerology (SCS): one subcarrier, i.e. 15, 30 or 60 kHz wide.
Let’s assume the gNB transmits constant total downlink power across the bandwidth and we only vary the SCS. With a wider SCS (higher numerology) there are fewer REs per OFDM symbol across that bandwidth, so each RE carries more power. The RSRP measured at the higher numerology will therefore be higher, even though the total power transmitted across the BW remains the same.
For example, consider a 10 MHz channel where the gNB has a maximum power it can send across the total bandwidth (or a maximum EPRE).
If SCS is 15 kHz, there are 52 RBs → 624 REs per OFDM symbol.
If SCS is 30 kHz, there are 24 RBs → 288 REs.
If SCS is 60 kHz, there are 11 RBs → 132 REs.
Does this mean that the RSRP measured with 60 kHz SCS will read roughly 6 dB higher than with 15 kHz SCS for the same transmitted power?
Take reference sensitivity testing as an example: TS 38.101-1 Table 7.3.2-1a gives the total power across the whole channel bandwidth. If I want to know the corresponding RSRP, then even though the total power is approximately the same for those different SCSs, the resulting RSRP will be quite different.
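A quick sketch of that arithmetic in Python (the RB counts are the 10 MHz values quoted above; the -95 dBm total power is just a hypothetical placeholder, not the actual Table 7.3.2-1a entry):

```python
import math

def per_re_power_dbm(total_power_dbm: float, n_rb: int) -> float:
    """Per-RE power (an RSRP proxy) when the total channel power is split evenly over all REs."""
    return total_power_dbm - 10 * math.log10(n_rb * 12)  # 12 subcarriers per RB

P_TOTAL_DBM = -95.0  # hypothetical total power across the 10 MHz channel (illustrative only)
rbs_per_scs = {15: 52, 30: 24, 60: 11}  # RBs in a 10 MHz channel for each SCS

rsrp = {scs: per_re_power_dbm(P_TOTAL_DBM, n_rb) for scs, n_rb in rbs_per_scs.items()}
for scs, val in rsrp.items():
    print(f"SCS {scs} kHz: RSRP ~ {val:.1f} dBm")

print(f"60 kHz vs 15 kHz: {rsrp[60] - rsrp[15]:+.1f} dB")  # about +6.7 dB
print(f"30 kHz vs 15 kHz: {rsrp[30] - rsrp[15]:+.1f} dB")  # about +3.4 dB
```

So with a hard cap on total channel power, the 15 kHz → 60 kHz offset comes out at about 6.7 dB rather than exactly 6 dB, simply because the RB counts are not exact halvings.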
Is this a correct interpretation? Does it then mean that an RSRP value can only be considered a good indication of NW coverage when we also know the SCS? That would imply that an RSRP of -110 dBm on C-band, typically deployed with 30 kHz SCS, is actually equivalent to about -113 dBm on an LTE band (15 kHz REs) at the same power spectral density.
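And the C-band comparison in numbers (a minimal sketch of the same reasoning, assuming only that the power spectral density is equal, so a wider RE collects proportionally more power):

```python
import math

def rescale_rsrp(rsrp_dbm: float, scs_from_khz: int, scs_to_khz: int) -> float:
    """Express an RSRP reading per a different RE bandwidth, assuming equal power
    spectral density: going to a narrower SCS removes 10*log10(from/to) dB."""
    return rsrp_dbm - 10 * math.log10(scs_from_khz / scs_to_khz)

# -110 dBm measured on a 30 kHz SCS carrier, expressed per 15 kHz RE (LTE-style):
print(f"{rescale_rsrp(-110.0, 30, 15):.1f} dBm")  # -113.0 dBm
```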