To begin to answer this question, we must first unpack the concepts in their current context.
The NEF makes no prediction about how error is propagated in the brain. It describes how to perform computation with vectors in spiking neural networks. It also defines how error signals can be used to change how a signal is encoded into (taken in by) and decoded from (sent out of) a neural population, as well as how to perform probabilistic computations. For further information on this, see the PES, BCM, and Voja learning rules, as well as Chapter 9 of the NEF textbook.
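To make the error-driven decoding part concrete, here is a minimal sketch using the Nengo simulator and its built-in PES rule (the ensemble sizes, learning rate, and stimulus are just illustrative): a population learns to decode a signal online, driven entirely by an externally supplied error signal.

```python
import numpy as np
import nengo

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    pre = nengo.Ensemble(100, dimensions=1)    # population encoding the signal
    post = nengo.Ensemble(100, dimensions=1)   # population whose decoding is learned
    error = nengo.Ensemble(100, dimensions=1)  # population representing the error

    nengo.Connection(stim, pre)

    # Decoders start at zero; the PES rule adjusts them online from the error.
    conn = nengo.Connection(pre, post, function=lambda x: 0,
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    # error = post - stim; feeding this to the learning rule drives the
    # decoded output of `post` toward the stimulus over time.
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(5.0)
```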
The SPA also makes no claims about how error is propagated throughout the brain. It claims only that compression must occur for scalable, grounded knowledge representation, and it defines some operations for accomplishing this.
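For a sense of what those compression operations look like, here is a toy sketch of the SPA's binding operation, circular convolution, in plain NumPy (the dimensionality and vectors here are arbitrary; in practice you would use the SPA tooling rather than roll this by hand):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution: compresses two d-dimensional vectors into one.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    # Approximate inverse: convolve with the involution of b.
    return bind(c, np.concatenate(([b[0]], b[:0:-1])))

d = 512
rng = np.random.default_rng(0)
red = unit(rng.standard_normal(d))
circle = unit(rng.standard_normal(d))

red_circle = bind(red, circle)           # one vector, same dimensionality
recovered = unbind(red_circle, circle)

# The recovered vector is a noisy version of `red`: its similarity to `red`
# is well above the ~0 expected for unrelated random vectors.
print(np.dot(unit(recovered), red))
```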
Predictive coding appears to be (based on the paper you linked) about defining how error is propagated. Specifically, errors flow up the hierarchy, while higher-level predictions/impressions flow down and adjust how those errors propagate upward.
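To be clear about what I mean by that, here is my own toy reading of the loop in NumPy, not the formulation from the linked paper: a higher level sends a prediction down, the mismatch with the input is sent back up as an error, and that error is what updates the higher level.

```python
import numpy as np

rng = np.random.default_rng(0)

# x: sensory input at the lower level; r: representation at the higher level;
# W: generative weights mapping the higher-level representation to a prediction.
W = rng.standard_normal((10, 4)) / np.sqrt(10)
x = rng.standard_normal(10)
r = np.zeros(4)

lr = 0.2
for step in range(50):
    prediction = W @ r          # top-down prediction of the input
    error = x - prediction      # prediction error, the signal sent up the hierarchy
    r = r + lr * (W.T @ error)  # higher level updates to reduce its own error

# The residual error settles to whatever this (fixed) generative model cannot
# explain; with learning, W itself would also be adjusted from the same errors.
print(np.linalg.norm(error))
```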
As hinted at previously, this is somewhat separate from the NEF/SPA. However, that does not imply they can't be integrated, nor does it mean there aren't already models that satisfy the predictive coding criteria beyond simply unifying perception and action through shared representations (I think DeWolf's REACH control hierarchy might fall into this category). It just means that, as of now, no explicit relationship has been drawn between the two modelling philosophies. To draw a stronger conclusion, I would need to understand the implementation details of predictive coding to see whether they are biologically plausible and whether they can be formulated as vector-based operations within the SPA. This seems to be the position my colleagues share (they have a commentary article appended to the one you linked, "God, the devil, and the details: Fleshing out the predictive processing framework"), which is that more work needs to be done before a position can be taken.