Exact (7)
Wren published a machine-learning method trained on the ChemID chemical database and used it to find chemical entity mentions in PubMed abstracts.
The baselines evaluated in our experiments are matrix factorization trained on continuous data (referred to as MF), and the KronRLS method trained on both continuous data (referred to as Continuous KronRLS) and binarized data (referred to as Binary KronRLS).
It is based on a machine-learning method trained on sequence and sequence-derived features.
This suggests that the Naslund method, trained on a small dataset, may tend to under-predict VNTRs when the 0.5 cut-off is used.
The method, trained on ~19,000 disease-associated variants, has been tested on 10,000 mutations in the COSMIC database, prioritized according to their recurrence and multiplicity.
Here, we develop a computational method, trained on the set of interaction data measured in Stiffler et al. (2007) to quantitatively predict PDZ domain peptide interactions involving previously unseen PDZ domains and/or peptides from their primary sequences.
Similar (53)
To investigate whether the classification error is due to data scarceness, we examined the performance of two classification methods trained on datasets with various sizes.
In what follows, we attempt to answer this question using five allele-specific blind test sets [30] to evaluate the performance of the three prediction methods trained on the unique, similarity-reduced, and weighted versions of the MHCBN data for the corresponding alleles.
There was little difference in prediction accuracy for semi-supervised methods trained on positively labeled data only, compared with training on positive and negative samples.
However, as shown by our experiments, generic dictionary matching-based methods tend to perform weakly compared with dedicated machine learning methods trained on sufficient resources, motivating the development of a dedicated tagger.
The method was trained on known human pre-miRNAs and achieved a high sensitivity (∼90%) when applied to known pre-miRNAs from human and several other organisms.