In meta-analysis, random-effects models are standard tools for addressing between-study heterogeneity in evidence synthesis. The normal distribution has been adopted as the random-effects distribution in most systematic reviews because of its computational and conceptual simplicity. However, this restrictive model assumption can seriously influence the overall conclusions in practice. In this article, we first provide two real-world examples that clearly show the normal distribution assumption to be unsuitable. To address this restriction, we propose alternative random-effects models that can flexibly regulate skewness, kurtosis, and tailweight: the skew normal distribution, skew t-distribution, asymmetric Subbotin distribution, Jones-Faddy distribution, and sinh-arcsinh distribution. We also developed an R package, flexmeta, that makes these methods easy to apply. Under the flexible random-effects distribution models, the results of the two meta-analyses were markedly altered, potentially changing the overall conclusions of these systematic reviews. The flexible methods and computational tools can provide more precise evidence, and we recommend them at least as sensitivity analysis tools for assessing the influence of the normal distribution assumption of the random-effects model.
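For context, the conventional normal random-effects model discussed above is usually written as follows; this is the standard textbook formulation (not reproduced from the article), with $y_i$ the observed effect estimate of study $i$ and $s_i^2$ its known within-study variance:

```latex
% Standard normal-normal random-effects meta-analysis model:
%   within-study level: sampling error around the study-specific true effect
%   between-study level: true effects drawn from a normal random-effects distribution
\begin{align}
  y_i \mid \theta_i &\sim N(\theta_i,\, s_i^2), \qquad i = 1, \dots, K, \\
  \theta_i &\sim N(\mu,\, \tau^2),
\end{align}
% mu: overall mean effect; tau^2: between-study (heterogeneity) variance.
```

The flexible models proposed in the article replace the second, between-study line $\theta_i \sim N(\mu, \tau^2)$ with a skewed or heavy-tailed family (e.g., skew normal or skew t), while the within-study sampling model is unchanged.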