We put forward a new family of computational assumptions, the Kernel Matrix Diffie-Hellman Assumption. Given a matrix A sampled from some distribution D_{ℓ,k}, the kernel assumption says that it is hard to find "in the exponent" a nonzero vector in the kernel of A^T. This family is the natural computational analogue of the Matrix Decisional Diffie-Hellman Assumption (MDDH), proposed by Escala et al. As such, it allows the advantages of their algebraic framework to be extended to computational assumptions. The k-Decisional Linear Assumption is an example of a family of decisional assumptions of strictly increasing hardness as k grows. We show that for any such family the corresponding Kernel Assumption family is also a family of strictly increasingly weaker computational assumptions. This requires ruling out the existence of certain black-box reductions between flexible problems (i.e., computational problems with a non-unique solution).
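A sketch of the assumption described above, in the standard implicit ("in the exponent") notation [x] := g^x; the precise symbols are inferred from the abstract rather than quoted from the paper:

```latex
% D_{l,k}-KerMDH assumption (sketch). G is a cyclic group of prime
% order q with generator g, and [x] denotes g^x, applied entrywise
% to vectors and matrices.
\textbf{Assumption} ($\mathcal{D}_{\ell,k}$-KerMDH, sketch).
Let $\mathbf{A} \leftarrow \mathcal{D}_{\ell,k}$ be sampled from
$\mathbb{Z}_q^{\ell \times k}$. The assumption states that every
probabilistic polynomial-time adversary, given $[\mathbf{A}]$,
outputs a vector $[\mathbf{x}] \in \mathbb{G}^{\ell}$ satisfying
\[
  \mathbf{x}^{\top}\mathbf{A} = \mathbf{0}
  \quad\text{and}\quad
  \mathbf{x} \neq \mathbf{0}
\]
only with negligible probability. Note that the solution is not
unique (any nonzero element of $\ker \mathbf{A}^{\top}$ works),
which is what the abstract calls a \emph{flexible} problem.
```

The decisional counterpart (MDDH) asks to distinguish $([\mathbf{A}], [\mathbf{A}\mathbf{w}])$ from $([\mathbf{A}], [\mathbf{u}])$ for random $\mathbf{w}, \mathbf{u}$; KerMDH is implied by it, which is why it is the "natural computational analogue".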
Citation
Morillo, P., Ràfols, C., Villar, J. L. "Matrix computational assumptions in multilinear groups". 2015.