In my semi-ongoing series of “things I wished someone had told me…” I wanted to share my sense of common “gotchas” in genomics. Here’s my top 10.
- No large-scale dataset is perfect, and many are at some distance from perfection. This is true of everything from assemblies through gene sets and transcription factor binding sites to phenotypes generated in cohorts. For people who come from a more focused, single area of biology, where in a single experiment you can have mastery of every component if desired, this ends up being a bit weird: whenever you dig into something you'll find miscalls, errors or general weirdness. Welcome to large-scale biology.
- When you do predictions, favour specificity over sensitivity. Most problems in genomics are bounded by the genome/gene set/protein products, so the totally wide "capture everything" experiment (usually called "a genome-wide approach") has become routine. It is rarely the case (though not never) that one wants a high-sensitivity set which is not genome-wide. This means that for prediction methods you want to focus on specificity (ie, driving down your error rate), as long as you are still generating a reasonable number of predictions (>1,000, say) and, of course, cross-validating your method.
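As a toy sketch of the trade-off (the scores, labels and thresholds below are all invented for illustration), the choice is largely about where you put your score threshold: pushing it up buys specificity at the cost of sensitivity:

```python
# Minimal sketch: specificity vs sensitivity at a score threshold.
# All data here is hypothetical, purely to show the arithmetic.

def confusion(scores, labels, threshold):
    """Count (TP, FP, FN, TN) for candidates called above `threshold`."""
    tp = sum(1 for s, t in zip(scores, labels) if s >= threshold and t)
    fp = sum(1 for s, t in zip(scores, labels) if s >= threshold and not t)
    fn = sum(1 for s, t in zip(scores, labels) if s < threshold and t)
    tn = sum(1 for s, t in zip(scores, labels) if s < threshold and not t)
    return tp, fp, fn, tn

def specificity(tp, fp, fn, tn):
    """Fraction of true negatives correctly left uncalled."""
    return tn / (tn + fp)

def sensitivity(tp, fp, fn, tn):
    """Fraction of true positives successfully called."""
    return tp / (tp + fn)

# Invented predictor scores and truth labels for four candidates.
scores = [0.95, 0.80, 0.40, 0.10]
labels = [True, False, True, False]
tp, fp, fn, tn = confusion(scores, labels, threshold=0.9)
```

With the strict threshold of 0.9 only one candidate is called, so the error rate (false positives) drops to zero while sensitivity falls to a half; in a genome-wide setting that is usually the right direction to lean.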
- When you compare experiments you are comparing the combination of the experiment and the processing. If the processing was done by separate groups, in particular with complex scripting or filtering, expect many differences to be due solely to the processing.
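A toy sketch of the effect (the call tuples and filter thresholds are entirely made up): two groups filtering the identical call set with slightly different scripts end up with different call sets, and the disagreement is pure processing, not biology:

```python
# Hypothetical variant calls as (chrom, pos, quality, depth).
calls = [
    ("chr1", 100, 35, 12),
    ("chr1", 200, 28, 15),
    ("chr2", 50, 55, 8),
]

# Group A's script filters on quality alone; group B's on quality plus depth.
group_a = {c for c in calls if c[2] >= 30}
group_b = {c for c in calls if c[2] >= 20 and c[3] >= 10}

# Calls present in one pipeline but not the other: a processing artefact.
processing_only = group_a ^ group_b
```

Here two of the three calls disagree between the "experiments" even though the underlying data is byte-for-byte identical.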
- Interesting biology is confounded with artefacts (1). Interesting biology is inevitably about things which are outliers or which form a separate cluster in some analysis. So are artefacts in the experimental or bioinformatics process: everything from biases towards the reference genome to correlations of signal actually being driven by a small set of sites.
- Interesting biology is confounded with artefacts (2). There is a subset of the above which is so common as to be worth noting separately. When you have an error rate (and everything has an error rate, due to point 1), the errors are either correlated with the biological classification (see point 2) or uniform. Even when they are uniform, you still get misled, because often you want to look at things which are rare: for example, homozygous stop codons in a whole-genome sequencing run, or lack of orthologs between species. The fact that the biological phenomenon you are looking for is rare means that you enrich for errors.
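A back-of-envelope sketch (all the rates below are invented for illustration) of why rarity enriches for errors: even a per-site error rate far better than the genome is big can swamp a rarer biological signal:

```python
# Hypothetical numbers, purely to illustrate the arithmetic.
sites = 3_000_000_000    # positions surveyed across a genome
true_rate = 1e-8         # how often the rare biology genuinely occurs
error_rate = 1e-6        # uniform per-site miscall rate of the pipeline

expected_real = sites * true_rate      # ~30 genuine events
expected_errors = sites * error_rate   # ~3,000 miscalls
fraction_real = expected_real / (expected_real + expected_errors)
```

With these (made-up but not unrealistic-looking) numbers, around 99% of your "rare biology" call set is error, despite the error rate being only one in a million per site.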
- Interesting biology is usually hard to model as a formal data structure, and one has to make some compromises just to make things systematic. Should one classify the Ig locus as one gene, many genes, or something else? Should one treat a SNP that creates a new selenocysteine-recoding UGA stop codon as a non-synonymous variant? To what extent should you model the difference between two epitopes used for ChIP-seq pull-downs of the same factor in different laboratories? Trying to handle all of this "correctly" becomes such a struggle to be both systematic and precise that at some point one has to compromise, and just write down a discussion in plain old English or reference the papers. Much of bioinformatics database work is about pushing the boundary between systematic knowledge and written-down knowledge further; but you will always have to compromise. Biology is too diverse.
- The corollary of points 1, 2 and 4 is that when most of your problems in making a large-scale dataset are about modelling biological exceptions, your error rate is low enough. Until you are agonising over biological weirdness, you've still got work to do on your error rate.
- Evolution has a requirement that things work, not that they are elegant engineering solutions. Expect jury-rigged systems which can be bewildering in their complexity. My current favourite is the platypus X chromosome system, which is just clearly a bonkers solution to heterogametic sex. Have fun reading about it (here's one paper to get you started)!
- Everyone should learn the basics of evolution (trees, and orthologs vs paralogs; please, could everyone use these terms correctly!) and population genetics (Hardy-Weinberg equilibrium, the impact of selection on allele frequency, and coalescence, in particular with respect to human populations). For the latter, people often need to be reminded that the fact that a site is polymorphic does not mean it is not under negative selection.
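As a reminder of the basics, the Hardy-Weinberg expectation is easy to sketch (a minimal illustration, not tied to any real data); it also shows why a deleterious allele kept rare by selection is still comfortably polymorphic:

```python
# Minimal Hardy-Weinberg sketch; the allele frequency is illustrative.

def hardy_weinberg(p):
    """Expected genotype frequencies (AA, Aa, aa) for allele frequency p,
    assuming random mating, no selection, no drift, no migration."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

# An allele at 1% frequency: polymorphic, yet homozygotes are ~1 in 10,000,
# so the allele is mostly carried in heterozygotes and selection against
# homozygotes removes it only slowly.
hom_major, het, hom_minor = hardy_weinberg(0.99)
```

This is the sense in which "the site is polymorphic" tells you very little on its own about whether it is under negative selection.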
- Chromosomes have been arbitrarily orientated p to q. This means that the forward and reverse strands have no special status on a reference genome. If any method gives a difference between the forward and reverse strands on a genome-wide scale, it's probably a bug somewhere 🙂
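A minimal sanity check one might sketch along these lines (the feature lists and tolerance are hypothetical): genome-wide, features should split roughly 50/50 between strands, and a large skew is a red flag for the method rather than a discovery:

```python
# Sketch of a strand-balance sanity check; tolerance is an arbitrary choice.

def strand_balance(strands):
    """Fraction of features annotated on the forward ('+') strand."""
    fwd = sum(1 for s in strands if s == "+")
    return fwd / len(strands)

def looks_buggy(strands, tolerance=0.05):
    """Flag a genome-wide call set whose strand split deviates from 50/50."""
    return abs(strand_balance(strands) - 0.5) > tolerance
```

Small regions can legitimately be skewed, so a check like this only makes sense on a genome-wide scale, which is exactly the scale at which the reference orientation is arbitrary.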
I am sure other people have other comments or suggestions 😉