In many applied sciences, the main aim is to learn the parameters of parametric operators that best fit the observed data. Raissi et al. (J Comput Phys 348(1):683–693, 2017) provide an innovative method for resolving such problems by employing Gaussian processes (GPs) within a Bayesian framework. In this methodology, GP priors are tailored to the particular form of the operator, and the hyperparameters are inferred by maximum likelihood estimation (MLE). However, both MLE-based and Bayesian inference usually involve setting up a covariance matrix that can be ill-conditioned. In this paper, the numerical stability of the methodology is investigated during training and prediction under two different assumptions: noisy and noise-free data. The numerical simulations show that, for noisy observations, increasing the noise level increases the uncertainty in the predictions and decreases the accuracy of the parameter estimates, while the likelihood computation and model prediction remain numerically stable. In the noise-free case, by contrast, increasing the number of observations significantly reduces the uncertainty quantification encoded in the posterior variances and improves the accuracy of the estimates, but the conditioning of the problem deteriorates severely, yielding a highly ill-conditioned problem. Therefore, in this paper, an approach based on the hybrid kernels technique is proposed to control the conditioning of the problem and thereby improve the performance of the method under the noise-free data assumption.
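The trade-off described above can be illustrated with a minimal numerical sketch (not the paper's own experiment): for a squared-exponential kernel, the condition number of the noise-free covariance matrix grows rapidly with the number of observations, whereas adding a noise variance ("nugget") term to the diagonal bounds the smallest eigenvalue from below and stabilises the computation. The kernel choice, lengthscale, grid, and noise level below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0):
    """Squared-exponential (RBF) covariance matrix for 1-D inputs (illustrative choice)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Dense 1-D design: many noise-free observations on a short interval.
x = np.linspace(0.0, 1.0, 50)
K = rbf_kernel(x)

# Noise-free covariance: numerically close to singular.
cond_noise_free = np.linalg.cond(K)

# Adding a noise variance sigma_n^2 on the diagonal (the "nugget")
# bounds the smallest eigenvalue from below by sigma_n^2, which
# stabilises the Cholesky factorisation used in the log-likelihood.
sigma_n = 1e-2
cond_noisy = np.linalg.cond(K + sigma_n**2 * np.eye(len(x)))

print(f"condition number, noise-free: {cond_noise_free:.3e}")
print(f"condition number, with noise: {cond_noisy:.3e}")
```

This mirrors the behaviour reported in the abstract: the noise term sacrifices some fidelity to the data but keeps the covariance matrix well-conditioned, while the noise-free matrix becomes numerically unusable as observations accumulate.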