Archive for the ‘NIFarious Ideas’ Category

Perfect storm to fix antibody problems?

Posted on September 9th, 2016 in Anita Bandrowski, General information, NIFarious Ideas | No Comments »

This article comes from the GBSI folks, reporting on antibody validation solutions:

Well, the perfect storm has converged, and a proposal for the validation of antibodies continues the year-long effort by many in the research community to fix a widely known problem in research reproducibility – poorly characterized antibodies. This first of many efforts to come to a head is led by Mathias Uhlen and the ad hoc International Working Group on Antibody Validation, and outlines five conceptual pillars for antibody validation to be applied in an application-specific manner. Several of the co-authors will present their paper at HUPO 2016 later this month in Taipei. Additional efforts include the 2nd International Antibody Validation Meeting on September 15th in the UK, and one by GBSI that you may have heard about. An online community has emerged to help crowdsource potential standards and will continue to drive the conversation. Still not convinced this is a big deal? Thermo Fisher Scientific has pledged to verify the specificity of its antibodies in line with these new recommendations. Pair these and other activities with Cell's STAR Methods and the future is looking brighter for research reproducibility.

A STAR is Born, Indeed

Posted on August 26th, 2016 in Anita Bandrowski, Data Spotlight, News & Events, NIFarious Ideas | No Comments »

News on the RRID front is encouraging!


We have been very busy adding new journals over the last year. It is wonderful whenever we see a new journal with an RRID, especially when the instructions to authors have been updated and you know that this is a serious effort from the editors.

More recently, RRIDs are being typeset into articles by publishers such as BMC, eLife (structured methods), Elsevier, and Cell Press, improving the syntax of the identifiers and allowing journals to link from articles to databases if they choose to do so.

However, an entire journal group has just taken this a step further. Cell Press has restructured its methods section to be STAR (Structured, Transparent, Accessible Reporting) compliant. This of course includes RRIDs!

http://dx.doi.org/10.1016/j.cell.2016.08.021

The idea is that authors create a table of research resources, helping to keep track of all the “ingredients one needs to replicate the study”; this echoes the NIH language of Rigor and Transparency. This will be a real boon for reproducible science!
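To make the idea concrete, a single row of such a resource table might look something like the following (a purely hypothetical entry, not taken from any actual STAR Methods paper): REAGENT or RESOURCE: rabbit polyclonal anti-GFP antibody; SOURCE: Vendor X; IDENTIFIER: Cat# 1234, RRID:AB_123456. The RRID gives both readers and text-mining tools an unambiguous handle on exactly which reagent was used.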

Some papers using the new format are already out from Cell:

http://www.sciencedirect.com/science/article/pii/S009286741631011X

http://www.sciencedirect.com/science/article/pii/S0092867416309953

http://www.sciencedirect.com/science/article/pii/S0092867416309321

We LOVE structured methods!

Your grandmother is much better at open reproducible science than you!

Posted on July 15th, 2016 in Anita Bandrowski, News & Events, NIFarious Ideas | No Comments »

Yes, you read that correctly: I am calling you out on your ability to do open, reproducible science.

This 3-minute video should convince you:

[Embedded video]

If not, then leave a comment!

Protect Yourself from Zombie Papers

Posted on April 25th, 2016 in Anita Bandrowski, Data Spotlight, NIFarious Ideas | No Comments »

Another fun flier to post around the department.

Zombification of papers: the inability to use or validate the information in a paper.
How can we stop this terrible plague on the scientific literature? RRIDs help get the key biological reagents identified and authenticated.

Feel free to print this fun flier and post it on your office door!

[Flier image]

RRID: Improve your Impact Factor!

Posted on April 25th, 2016 in Anita Bandrowski, News & Events, NIFarious Ideas | No Comments »

Please feel free to take this fun flier and post it around your lab to help your lab-mates remember how to get an RRID into their next methods section or grant application.

[Flier image]

Brain Health Registry

Posted on March 3rd, 2014 in Anita Bandrowski, News & Events, NIFarious Ideas | No Comments »

The Brain Health Registry — led by researchers at UCSF — is a groundbreaking, web-based project designed to speed up cures for Alzheimer’s, Parkinson’s and other brain disorders.  It uses online questionnaires and online neuropsychological tests (which are very much like online brain games). It can make clinical trials — which are needed to develop cures — faster, better and less expensive.

 

The project is scheduled for a public launch in the spring, but we’re inviting you to be among the first to participate and provide feedback.

 

Visit our website to get more information about the Brain Health Registry.

  • It’s easy. It takes a few minutes to sign up and less than 3 hours per year. And it’s all done online, so you can do it from home — or anywhere you have Internet access.
  • It offers a breakthrough. 85% of clinical trials have trouble recruiting enough participants. By creating a large online database of pre-qualified recruits, The Brain Health Registry can dramatically cut the cost and time of conducting clinical trials. This is the first neuroscience project to leverage online possibilities in this way and on this scale.
  • It’s meaningful. With every click of the mouse, you help researchers get closer to a cure for Alzheimer’s and other brain diseases. If Alzheimer’s runs in your family, this may be an important gift to your loved ones.
  • It’s safe. Top scientists from some of the most respected institutions in medicine are leading the Brain Health Registry. They understand your need for privacy, and they will protect it at every step of the way.

We’re currently in our pre-launch phase.  Try it out!  If you offer feedback – and we hope you do – we will read it, consider it carefully, and respond to you directly.

 

As an early adopter, you can help us in two ways.  You can help in the way all members can help — by answering the questionnaires and taking the online brain tests, you strengthen the database that the scientific community needs.  You can also help us improve our new website – we’ll be making many changes, based on your feedback, before our public launch.

 

Please take the time to visit our site, sign up, and offer your feedback.

Resource Identification Initiative – AntibodiesOnline is giving away “nerd mugs” to help identify antibodies in your paper.

Posted on December 16th, 2013 in Anita Bandrowski, News & Events, NIFarious Ideas | No Comments »

Dear NIF Community members,

Our partners in crime, FORCE11, are hosting a working group, the Resource Identification Initiative, that is working with journals to make it easier to identify research resources used in the materials and methods of biomedical research through the use of unique identifiers.  For the pilot project, we are concentrating on antibodies, genetically modified organisms and software.  Our goal is to make identification of research resources:  1)  machine-processable;  2)  available outside any paywall;  3)  uniform across journals.  More information can be found at:  http://www.force11.org/Resource_identification_initiative

Our colleagues at Antibodies On Line have set up a beta testing site specifically for antibodies: http://www.antibodies-online.com/resource-identification-initiative/

If you use antibodies in your research, or know those who do, please help us test the tools and provide feedback. Antibodies Online is generously providing nerd mugs and shirts to those who participate (they make great holiday gifts!).

Do you know what you don’t know? A gap analysis of Neuroscience Data.

Posted on October 17th, 2013 in Anita Bandrowski, Data Spotlight, Inside NIF, NIFarious Ideas | No Comments »

My thesis adviser, a colorful spirit and one whose wisdom will long be missed, used to say that undergraduate or professional students differed from graduate students in that they were asked to learn what was known about a subject, while graduate students were asked to tackle the unknown.

We, in higher education, are essentially seeking to find out what is not known and to start coming up with new answers. How does one find out what is not known? In fact, is it even possible? Don’t most graduate students or postdocs add onto a lab’s existing body of knowledge, chipping away at the unknown by building on the known? If this is how we work, does it create a very skewed version of the brain? How would we even know what is truly unknown?

Now we enter the omics era, where we try to find out all things about a set of things. We no longer want to know about a gene; we want to know about all of the genes, the genome of an organism. We want to account for all things of the type DNA and figure out which parts do what. In neuroscience, this tends to be a little more difficult, mainly because we do not have a finite list of things that we can account for. We have a large number of species with brains, or at least ganglia, and a single human brain contains billions of cells and many more connections between them. The worst part is that these connections are not even static, so a wiring diagram is only good for a few minutes for a single brain before the brain reorganizes some of these connections.

Is there hope for an “omics” approach to neuroscience?

Well, the space is not infinite and it has been studied for the last 100+ years, so we have some ways of getting at the problem. We have a map!
Can we use this map to figure out some basic information about what we do and do not study? Well, the short answer, at least for some things, seems to be yes!

The Neuroscience Information Framework (neuinfo.org) project has been aggregating data of various sorts that are useful to neuroscientists, along with a set of vocabularies for all of the brain parts, the map of the nervous system. So we can start to look at which labels are used for tagging data and which are found in the literature. Are all parts of the brain represented by relatively even amounts of data or papers, or are there hot spots and cold spots for data?

Below is a heat map, generated using the Kepler tool, of data sources versus canonical brain regions (a hierarchy built to resemble what one might find in a graduate-level neuroanatomy textbook).
[Figure: heat map of data sources vs. canonical brain regions]

Although the heat map is very hard to read (the darker the green, the more data; you can generate your own by clicking on the graph icon in NIF), there is little doubt that all brain regions are not equal: some have very little data, while others have a plethora, raising the question: are there popular brain regions and not-so-popular brain regions?

[Figure: amount of data per brain region label]

Indeed, when looking at the data, some brain region annotations are found far more often than others, and much like pop stars, they tend to have shorter names. The most popular data label is actually “brain”, and the least popular appears to be the oculomotor nerve root. This starts to tell us that most data are labeled only at the level of “brain vs. kidney”, but can we do better as neuroscientists? In fact, we can break the labels down into major regions such as hindbrain, midbrain and forebrain and add up all of the data that fit into each of these. Note that most of the data are attributed to the forebrain, which houses some of the most popular brain regions such as the cerebral cortex and the hippocampus, but the hindbrain also comes back with a reasonable amount of data, mainly for the cerebellum. It turns out that adding up all the data labels for midbrain regions leaves the awkward sense that the midbrain may be completely non-essential to brain research. On the other hand, the midbrain is essential to life, so why do neuroscientists not know much, or at least not publish much, about it?

[Figure: total data counts for forebrain, midbrain and hindbrain regions]
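For readers who like to tinker, here is a minimal sketch of the kind of roll-up described above: assign each annotation label to its major brain division, then sum the counts per division. It is written in Python with a toy hierarchy and entirely made-up counts; the real analysis uses the full NIF vocabularies and annotation counts aggregated across the NIF data sources.

```python
# Toy roll-up of data-record counts from specific brain regions to major divisions.
# The hierarchy and the counts below are illustrative only; the real analysis uses
# the NIF vocabularies (neuinfo.org) and counts aggregated across NIF data sources.

from collections import defaultdict

# Hypothetical mapping of annotation labels to their major brain division.
MAJOR_DIVISION = {
    "cerebral cortex": "forebrain",
    "hippocampus": "forebrain",
    "thalamus": "forebrain",
    "superior colliculus": "midbrain",
    "substantia nigra": "midbrain",
    "cerebellum": "hindbrain",
    "pons": "hindbrain",
}

# Hypothetical counts of data records annotated with each label.
label_counts = {
    "cerebral cortex": 12000,
    "hippocampus": 9500,
    "thalamus": 2100,
    "superior colliculus": 300,
    "substantia nigra": 800,
    "cerebellum": 4200,
    "pons": 600,
}

def roll_up(counts, hierarchy):
    """Sum per-label counts into their major division."""
    totals = defaultdict(int)
    for label, n in counts.items():
        totals[hierarchy.get(label, "unmapped")] += n
    return dict(totals)

if __name__ == "__main__":
    for division, total in sorted(roll_up(label_counts, MAJOR_DIVISION).items()):
        print(f"{division}: {total}")
```

Run on real NIF annotation counts, this sort of roll-up produces the forebrain/midbrain/hindbrain comparison shown in the figure above.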

So if you are hiding a big pile of data about the midbrain in your desk drawer, I would like to formally ask you to share it with NIF (just email info@neuinfo.org) so that I can stop thinking of the midbrain as the tissue equivalent of fly-over country.

 

Resource Identification Guidelines – now at Elsevier

Posted on September 6th, 2013 in Anita Bandrowski, Curation, Interoperability, NIFarious Ideas | No Comments »

The problem of reproducibility of results has been attributed by many groups to scientists having very large data sets and highlighting interesting, yet most likely statistically anomalous, findings, along with other science no-no’s like reporting only positive results.

Our group has been working to make methods and reagent reporting better, and I am happy to report that these ideas have been finding resonance.

In a working group sponsored by FORCE11, researchers, reagent vendors and publishers have been meeting to discuss how best to accomplish better reporting across the literature, and both the NIH and the publishers themselves are now becoming interested in its success. The latest and greatest evidence of this can be found on the Elsevier website as a guideline to authors; it will soon be followed by a pilot project, to be launched at the Society for Neuroscience meeting, involving over 25 journals and most major publishers.

Of course, there is no reason to wait for an editor to ask before putting in catalog numbers for reagents or stock numbers for transgenic animals. These are things that we should be trained to do in graduate school as good practice for reporting our findings.

We seem to be getting ready to change (or change back) to more rigorous methods reporting, which should strengthen the recently eroded credibility of the scientific enterprise. I, for one, hope that the message that will be communicated is: “scientists don’t hide problems, even endemic ones; we examine them and find workable solutions”.

Top 20 – Publications – of the month!

Posted on July 22nd, 2013 in Anita Bandrowski, Data Spotlight, NIFarious Ideas | No Comments »

At NIF we try to mix it up. Instead of bringing you the top databases this month, we thought that this may be of more interest.

A heartfelt congratulations to the most annotated papers of all time! It is sort of like being the most cited, but it is more like being the most useful.

The winners are clearly publications about databases: congratulations to InterPro, the Protein Data Bank, the Gene Ontology itself, and also the complete genomes of various organisms. The dark horse in the race asks “How many drug targets are there?” and the answer appears to be at least 7,000. Heyndrickx and Vandepoele may have the most citations per human being, because they wrote the only paper on this list with just two authors; most of the rest dilute their citations by an order of magnitude.

In any case, we thought that this would be a fun set of data to look at.

* Mulder et al The InterPro Database 2003 cited 49030 times

* Camon et al The Gene Ontology Annotation (GOA) project: implementation of GO in SWISS-PROT, TrEMBL, and InterPro 2003 cited 49030 times

* Heyndrickx and Vandepoele Systematic identification of functional plant modules through the integration of complementary data sources 2012 cited 37987 times

* Matsuyama et al ORFeome cloning and global analysis of protein localization in the fission yeast Schizosaccharomyces pombe 2006 cited 14156 times

* Barbe et al Toward a confocal subcellular atlas of the human proteome 2008 cited 9574 times

* Simmer et al Genome-wide RNAi of C elegans using the hypersensitive rrf-3 strain reveals novel gene functions 2003 cited 8249 times

* Heidelberg et al DNA sequence of both chromosomes of the cholera pathogen Vibrio cholerae 2000 cited 8192 times

* Imming et al Drugs, their targets and the nature and number of drug targets 2006 cited 7742 times

* Overington et al How many drug targets are there? 2006 cited 7664 times

* Gibbs et al Rat Genome Sequencing Project Consortium. Genome sequence of the Brown Norway rat yields insights into mammalian evolution 2004 cited 7594 times

* Berman et al The Protein Data Bank 2000 cited 7107 times

* Ceron et al Large-scale RNAi screens identify novel genes that interact with the C elegans retinoblastoma pathway as well as splicing-related components with synMuv B activity 2007 cited 6691 times

* Moran et al Genome sequence of Silicibacter pomeroyi reveals adaptations to the marine environment 2004 cited 6389 times

* Read et al The genome sequence of Bacillus anthracis Ames and comparison to closely related bacteria 2003 cited 6155 times

* Young et al Odorant receptor expressed sequence tags demonstrate olfactory expression of over 400 genes, extensive alternate splicing and unequal expression levels 2003 cited 5935 times

* Buell et al The complete genome sequence of the Arabidopsis and tomato pathogen Pseudomonas syringae pv tomato DC3000 2003 cited 5518 times

* Heidelberg et al Genome sequence of the dissimilatory metal ion-reducing bacterium Shewanella oneidensis 2002 cited 5183 times

* Methé et al Genome of Geobacter sulfurreducens: metal reduction in subsurface environments 2003 cited 4084 times

* Nelson et al Whole genome comparisons of serotype 4b and 1/2a strains of the food-borne pathogen Listeria monocytogenes reveal new insights into the core genome components of this species 2004 cited 3881 times

* Ward et al Genomic insights into methanotrophy: the complete genome sequence of Methylococcus capsulatus (Bath) 2004 cited 3877 times

The data are composed of annotations aggregated from over 40 individual databases and the Gene Ontology Consortium. For a complete and current list of databases included please see the NIF annotations information page, and to see a complete list of current annotations see the NIF annotations complete data set.

Note: these numbers are accurate as of July 20th, 2013, but may change as data are added.