RRIDs are in the wild! Thanks to JCN and PeerJ

Posted on April 9th, 2014 in Anita Bandrowski, Curation, Essays, News & Events | No Comments »

We believe that reproducing science starts with knowing what “materials” were used to generate the results.

Along with a truly dedicated group of volunteers from academia, government and non-government institutes, publishers, and commercial antibody companies, we have been running the Resource Identification Initiative (RII).

This initiative is meant to accomplish the following lofty goal: Ask authors to uniquely identify their antibodies (no easy task), organisms (an even harder task), and the databases and software tools that they used in their paper.

To ask at the appropriate time, we gathered a group of journal chief editors to help us pose this question during the publication process, when authors are most motivated to answer it. We also created several aids to identification: a database that stores information for five of the most common species used in experiments, antibody catalogs from over 200 vendors, and a database and tool catalog, the largest of its kind, containing over 3,000 software tools and over 2,500 academic databases.

We have been granted three months to determine whether authors would actually do this. Two months in, we have fielded requests from about 30 users who could not find their resources, more than 40 new software tools or databases have been registered in our tools registry, along with more than 100 antibodies, but we kept waiting for RRIDs to show up in the literature.

Today our wait is over, thanks to two papers: Khalil and Levitt in the Journal of Comparative Neurology, and Joshi et al. in PeerJ.

These authors were apparently able to correctly identify resources such as MATLAB, Neurolucida, and the Protein Data Bank, as well as antibodies, including an anti-cholera toxin antibody from List Bio.

What does this tell us?

Well, to start, that this process is not impossible! Identifiers do exist for many things, and the process of obtaining new ones is not so difficult that people can’t manage it. It also tells us that when asked at the right time, authors are willing to go the extra step to find and provide identifiers for their reagents and software tools!

Great, but why do I care about a single paper that uses an antibody or MATLAB?

Well, it turns out that for many years JCN and NIF staff have been working diligently to link papers through that same identifier, so in the case of this cholera toxin antibody we have marked 23 other papers that have used it since 2006.


Neuroinformatics 2014: abstract submission extended to April 27

Posted on April 8th, 2014 in Anita Bandrowski, Author, General information, News & Events | No Comments »

The yearly INCF Congress provides a meeting place for researchers in all fields related to neuroinformatics. This year, the congress will take place August 25-27 in Leiden, Netherlands.


Keynotes will be given by:

- Margarita Behrens, Salk Institute, La Jolla, CA, USA
“The epigenome and brain circuit changes during postnatal development”
- Dmitri (Mitya) Chklovskii, Howard Hughes Medical Institute Janelia Farm Research Campus, Ashburn, USA
“Can connectomics help us understand neural computation? Insights from the fly visual system”
- Daniel Choquet, University of Bordeaux, France
“A nanoscale view into the dynamics of AMPA receptor organization in synapses”
- Ila Fiete, University of Texas, USA
“Neural codes for representation and memory”
- Michael Milham, Child Mind Institute, New York, USA
“Emerging models for biomarker identification”
- Felix Schürmann, École Polytechnique Fédérale de Lausanne, Switzerland
“In silico neuroscience – an integrative approach”

LINKS

Submit your abstract by April 27 at the latest: http://www.neuroinformatics2014.org/abstracts

Registration is open at http://www.neuroinformatics2014.org/registration

Watch the NI2014 promo video: http://www.youtube.com/watch?v=aEudq3SOwK0

The congress poster is available for download at http://www.neuroinformatics2014.org/documents/a0_poster_nov_2013

The International Neuroinformatics Coordinating Facility (INCF) is an international organization launched in 2005, following a proposal from the Global Science Forum of the OECD to establish international coordination and collaborative informatics infrastructure for neuroscience; it currently has 17 member countries across North America, Europe, Australia and Asia. INCF establishes and operates scientific programs to develop standards for neuroscience data sharing, analysis, modeling and simulation, while coordinating an informatics infrastructure designed to enable the integration of neuroscience data and knowledge worldwide and to catalyze insights into brain function in health and disease.

Open Science? Try Good Science.

Posted on April 7th, 2014 in Author, Curation, Essays, Maryann Martone, News & Events | 1 Comment »

If the Neuroscience Information Framework is any guide, we are certainly in an era of “Openness” in biomedical science.  A search of the NIF Registry of tools, databases and projects for biomedical science for “Open” leads to over 700 results,  ranging from open access journals, to open data, to open tools.  What do we mean by “open”?  Well, not closed or, at least, not entirely closed.  These open tools are, in fact, covered by a myriad of licenses and other restrictions on their use.  But, the general theme is that they are open for at least non-commercial use without fees or undue licensing restrictions.

Open Science Share button

So, is Open Science already here?  Not exactly.  Open Science is more than a subset of projects that make data available or share software tools, often because they received specific funding to do so.  According to Wikipedia, “Open science is the umbrella term of the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.”   Despite the wealth of Open platforms, most of the products of science, including, most notably, the data upon which scientific insights rest, remain behind closed doors.  While attitudes and regulations are clearly changing, as the latest attempts by PLoS to establish routine sharing of data illustrate (just Google #PLOSfail), we are not there yet.

Why are so many pushing for routine sharing of data and a more open platform for conducting science?  I became interested in data sharing in the late 1990s as a microscopist, as we started to scale up the rate and breadth at which we could acquire microscopic images.  Suddenly, thanks to precision stages and wide-field cameras, we were able to image tissue sections at higher resolution over much greater expanses of tissue than before, when we were generally restricted to isolated snapshots or low-magnification surveys.   I knew that there was far more information within these micrographs and reconstructions than could be analyzed by a single scientist, and it seemed a shame that they were not made more widely available.  To help provide a platform, we established the Cell Centered Database (CCDB), which has recently merged with the Cell Image Library.  Although the CCDB did succeed in attracting outside researchers’ data, we were rarely contacted by researchers wanting to deposit their data; most of the time we had to ask, although many would release the data if we did.  But I do distinctly remember one researcher saying to me:  “I understand how sharing my data helps you, but not me”.

True.  So in the interest of full disclosure, let me state a few things.  I try to practice Open Science, but am not fanatical. I try to publish in open access journals, although I am not immune to the allure of prestigious closed journals.  I do blog, make my slides available through Slide Share, and upload pre-prints to Research Gate.  But I continue to remain sensitive to the fact that through my informatics work in the Neuroscience Information Framework and my advocacy for transforming scholarly communications through FORCE11 (the Future of Research Communications and e-Scholarship), I am now in a field where:  A)  I no longer really generate data.  I generate ontologies and other information artefacts, and these I share, but not images, traces, sequences, blots, structures;  B)  I do benefit when others share their data, as I build my research these days on publicly shared data.

But do I support Open Science because I am a direct beneficiary of open data and tools?  No.  I support Open Science because I believe that Open Science = Good Science.  To paraphrase Abraham Lincoln:  “If I could cure Alzheimer’s disease by making all data open, I would do so;  if I could cure Alzheimer’s disease by making all data closed, I would do so.”  In other words, if the best way to do science is the current mode:  publish findings in high impact journals that only become open access after a year, make sure no one can access or re-use your data, make sure your data and articles are not at all machine-processable, publish under-powered studies with only positive results, allow errors introduced by incorrect data or analyses to stay within the literature for years, then I’m all for it.

But we haven’t cured Alzheimer’s disease, or much else in the neurosciences, lately.  That’s not to say that our current science, based on intense competition and opaque data and methods, has not produced spectacular successes.  It surely has.  But the current system has also led to some significant failures, as the retreat of pharmaceutical companies from neuroscience testifies.  Can modernizing and opening up the process of science to humans and machines alike accelerate the pace of discovery?  I think we owe the taxpayers, who fund our work in the hope of advancing society and improving human health, an honest answer here.   Are we doing science as well as it can be done?

I don’t believe so.  And, as this is a blog and not a research article, I am allowed to state that categorically.  I believe that at a minimum, Open Science pushes science towards increased transparency, which, in my view, helps scientists produce better data and helps weed out errors more quickly.  I also believe that our current modes of scientific communication are too restrictive, and create too high a barrier for us to make available all of the products of our work, and not just the positive results.  At a maximum, I believe that routine sharing of data will help drive biomedical sciences towards increased discovery, not just because we will learn to make data less messy, but because we will learn to make better use of the messy data we have.

Many others have written on why scientists are hesitant to share, or outright refuse to share, their data and process (see #PLOSfail above), so I don’t need to go into detail here.  But at least one class of frequent objections has to do with the potential harm that sharing may do to the researcher who makes data available.  A common objection is that others will take advantage of data that you worked hard to obtain before you can reap the full benefits.  Some see no benefit in sharing negative results, detailed lab protocols, or data, or in blogging, arguing that it is more productive to publish new papers than to spend time making these other products available.   Others are afraid that if they make available data that might contain errors, their competitors will attack them and their reputations will be tarnished.  Some have noted that unlike in the Open Source Software community, where identifying and fixing a bug is considered a compliment, in other areas of scholarship it is considered an attack.

All of these are certainly understandable objections.  Our current reward system does not provide much incentive for Open Science, and changing our current culture, as I’ve heard frequently, is hard.  Yes it is.  But if our current reward system is supporting sub-optimal science, then don’t we as scientists have an obligation to change it?  Taxpayers don’t fund us because they care about our career paths.  No external forces that I know of support, or even encourage, our current system of promotion and reward:  it is driven entirely by research scientists.  Scientists run the journals, the peer-review system, the promotion committees, the academic administration, the funding administration, the scientific societies and the training of more scientists.  Given that non-scientists are beginning to notice, as evidenced by articles in the Economist (2013) and other non-science venues about lack of reproducibility, perhaps it’s time to start protecting our brand.

While many discussions on Open Science have focused on potential harm to scientists who share their data and negative results, I haven’t yet seen discussions on the potential harm that Opaque Science does to scientists.  Have we considered the harm that is done to graduate students and young scientists when they spend precious months or years trying to reproduce a result that was perhaps based on faulty data or selective reporting of results?  I once heard a heart-breaking story of a promising graduate student who couldn’t reproduce the results of a study published in a high impact journal.  His advisor thought the fault was his, and he was almost ready to quit the program.  When he was finally encouraged to contact the author, he found that they couldn’t necessarily reproduce the results either.   I don’t know whether the student eventually got his degree, but you can imagine the impact such an experience has on young scientists.   Beyond my anecdotal example above, we have documented examples where errors in the literature have significant effects on grants awarded or the ability to publish papers that are in disagreement (e.g., Miller,  2006).  All of these have a very real human cost to science and scientists.

On a positive note, for the first time in my career since I sipped the Kool-Aid back in the early days of the internet, I am seeing real movement towards change, not just by a few fringe elements, but by journals, senior scientists, funders and administrators.  It is impossible to take a step without tripping over a reference to Big Data or metadata.  Initiatives are underway to create a system of reward around data in the form of data publications and data citations.  NIH has just hired Phil Bourne, a leader in the Open Science movement, as Associate Director for Data Science.  And, of course, time is on our side, as younger scientists and those entering science perhaps have different attitudes towards sharing than their older colleagues.   Time will also tell whether Open Science = Good Science.  If it doesn’t, I promise to be the first to start hoarding my data again and publishing only positive results.

References:

Economist (2013) How science goes wrong. The Economist, October 19, 2013.

Miller, G. (2006) A scientist’s nightmare: software problem leads to five retractions. Science, 314(5807), 1856–1857.

 

Blog originally posted to Wiley Exchanges.

New Course at Princeton: Neurotechnologies for Analysis of Neural Dynamics

Posted on April 2nd, 2014 in Anita Bandrowski, Author, News & Events | No Comments »

Neurotechnologies for Analysis of Neural Dynamics (NAND) is a new, intensive four-week summer course designed to introduce physicists, mathematicians, engineers and computer scientists to the major questions and techniques of modern neuroscience. Both the lecture and laboratory components place special emphasis on neurotechnologies, ranging from large-scale electrode and optical recording (and optogenetic stimulation) to mathematical analysis of the neural dynamics within the datasets these methods produce.

The course is described in detail at nand.princeton.edu.

A grant from the Burroughs Wellcome Fund allows us to meet the full financial needs of all admitted students.

Brain Health Registry

Posted on March 3rd, 2014 in Anita Bandrowski, News & Events, NIFarious Ideas | No Comments »

The Brain Health Registry — led by researchers at UCSF — is a groundbreaking, web-based project designed to speed up cures for Alzheimer’s, Parkinson’s and other brain disorders.  It uses online questionnaires and online neuropsychological tests (which are very much like online brain games). It can make clinical trials — which are needed to develop cures — faster, better and less expensive.

 

The project is scheduled for a public launch in the spring, but we’re inviting you to be among the first to participate and provide feedback.

 

Click here to see our website and get more information about the Brain Health Registry.

  • It’s easy. It takes a few minutes to sign up and less than 3 hours per year. And it’s all done online, so you can do it from home — or anywhere you have Internet access.
  • It offers a breakthrough. 85% of clinical trials have trouble recruiting enough participants. By creating a large online database of pre-qualified recruits, The Brain Health Registry can dramatically cut the cost and time of conducting clinical trials. This is the first neuroscience project to leverage online possibilities in this way and on this scale.
  • It’s meaningful. With every click of the mouse, you help researchers get closer to a cure for Alzheimer’s and other brain diseases. If Alzheimer’s runs in your family, this may be an important gift to your loved ones.
  • It’s safe. Top scientists from some of the most respected institutions in medicine are leading the Brain Health Registry. They understand your need for privacy, and they will protect it at every step of the way.

We’re currently in our pre-launch phase.  Try it out!  If you offer feedback – and we hope you do – we will read it, consider it carefully, and respond to you directly.

 

As an early adopter, you can help us in two ways.  You can help in the way all members can help — by answering the questionnaires and taking the online brain tests, you strengthen the database that the scientific community needs.  You can also help us improve our new website – we’ll be making many changes, based on your feedback, before our public launch.

 

Please take the time to visit our site, sign up, and offer your feedback.

Resource Identification Quarter is Rapidly Approaching

Posted on January 27th, 2014 in Anita Bandrowski, Interoperability | No Comments »

What is Resource Identification Quarter?
At the 2012 Society for Neuroscience meeting, NIF met with the editors-in-chief of about 25 neuroscience journals, attempting to convince them that research resources (software tools, antibodies, and model organisms) should be treated as first-class research objects and cited appropriately. At a follow-up meeting at the National Institutes of Health, the journal editors agreed to start a pilot project to identify these resources using a uniform standard.

Why should we identify research objects?
The neuroscience literature unfortunately does not contain enough information for anyone to find many research resources. In a very typical paper (Paz et al., 2010), an antibody is referred to as “GFAP, polyclonal antibody, Millipore, Temecula, CA.” Someone trying to find this antibody in 2010 would have found 40 GFAP antibodies at Millipore; today, after the merger with EMD, the catalog contains 51 GFAP antibodies, with no indication of which ones were available then and which are new. Without even a catalog number, a researcher’s only options are to contact the authors or to buy all of these antibodies and test whether any of them has a similar profile. At several hundred dollars per antibody (51 antibodies at roughly $300 each comes to over $15,000) and the weeks needed to optimize staining, that is no better than a shot in the dark.
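To make the ambiguity concrete, here is a toy sketch in Python (the catalog entries are invented for illustration, not real Millipore listings) showing why a vendor-plus-target description matches several products while a catalog number matches exactly one:

    # Toy illustration of reagent ambiguity. These catalog entries are
    # invented for illustration; they are not real Millipore data.
    catalog = [
        {"cat_no": "AB0001", "target": "GFAP", "clonality": "polyclonal"},
        {"cat_no": "AB0002", "target": "GFAP", "clonality": "polyclonal"},
        {"cat_no": "MAB0003", "target": "GFAP", "clonality": "monoclonal"},
    ]

    # A methods-section description ("GFAP, polyclonal antibody, Millipore")
    # narrows the field, but several products still match...
    described = [e for e in catalog
                 if e["target"] == "GFAP" and e["clonality"] == "polyclonal"]
    print(len(described))  # 2 -- still ambiguous

    # ...while a catalog number resolves to exactly one product.
    exact = [e for e in catalog if e["cat_no"] == "AB0001"]
    print(len(exact))      # 1 -- unambiguous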

What about transgenic mice? Surely they can’t be that hard to identify?
After many conversations with model organism database curators at MGI, it turns out that people’s data are often left out of the database precisely because the world experts on mice can’t tell which mouse is being used in the paper. The nomenclature of transgenic mice is somewhere between an art form and black magic; however, including a simple stock number is a fairly simple solution to the problem. The nomenclature authorities for mice, rats, worms and flies happen to have convenient forms for asking for help or for naming a new critter (the list is available here).

How should we identify research objects?
We need a set of unique identifiers for research resources, analogous to ORCID iDs for authors or social security numbers for people. NIF has created a web page where authors can find these in one reasonably convenient location. Searching for information that authors should already have, such as the catalog number for an antibody or the stock number for an organism, should give a result in the appropriate tab, and a rather large “cite this” button will appear with the appropriate citation style.
A more detailed set of instructions and the search box can be found here: www.scicrun.ch/resources
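For a flavor of what these identifiers look like in a citation, here is a minimal Python sketch. The prefixes (AB_ for antibodies, SCR_ for software tools and databases, IMSR_JAX: for JAX mouse stock numbers) reflect the registries described above, but treat the exact patterns as an assumption, and the antibody ID below as a format illustration only:

    import re

    # Sketch of identifier validation. The accepted prefixes and exact
    # syntax here are assumptions based on the registries described above.
    RRID_RE = re.compile(r"^RRID:(AB_\d+|SCR_\d+|IMSR_JAX:\d{6})$")

    def cite(rrid):
        """Return a citation-ready string, or raise if the ID looks malformed."""
        if not RRID_RE.match(rrid):
            raise ValueError("unrecognized identifier: " + rrid)
        return "(" + rrid + ")"

    print(cite("RRID:IMSR_JAX:000664"))  # JAX stock 000664, the C57BL/6J mouse
    print(cite("RRID:AB_90755"))         # an antibody ID (format illustration only)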

So if you are considering submitting a paper over the next few months to one of the journals listed below, you will be asked to include a unique identifier for your research objects. We certainly hope that this is reasonably easy to accomplish, and we eagerly await the day when we can ask: which antibody did Paz et al. actually use, and on which transgenic mouse?

If you would like to see whether the antibodies you used in any of your previous papers can also be annotated with these unique identifiers, we will be happy to help. Our friends at antibodies-online are giving away coffee mugs and t-shirts to help get people interested in doing so. To annotate your paper, go through their survey at http://www.antibodies-online.com/resource-identification-initiative/. This will earn you a free mug, and the data will be included in the Antibody Registry for posterity.

List of Participating Journals:
Annals of Neurology
Brain
American Journal of Human Genetics
Behavioral and Brain Functions
Biological Psychiatry
BMC Neurology
BMC Neuroscience
Brain and Behavior
Brain Cell Biology
Brain Structure & Function
Cell
Cerebral Cortex
Developmental Neurobiology
Frontiers in Human Neuroscience
Frontiers in Neuroinformatics
Hippocampus
J. Neuroscience
Journal of Comparative Neurology
Journal of Neuroinflammation
Neuroinformatics
Journal of Neuroscience Methods
Molecular Brain
Molecular Pain
Nature
Nature Neuroscience
Neural Development
Neuroimage
Neuroscience

The NIA Butler-Williams Scholars Program (formerly Summer Institute on Aging Research) is accepting applications for an intensive introduction to aging research

Posted on January 24th, 2014 in Anita Bandrowski, General information | No Comments »

This workshop for investigators new to aging research focuses on the breadth of research supported by the National Institute on Aging, including basic biology, neuroscience, behavioral and social research, geriatrics and clinical gerontology. As an offering through the Office of Special Populations, the content will include a focus on health disparities, research methodologies, and funding opportunities. The Butler-Williams Scholars Program (B-W Scholars) is one of the premier short-term training opportunities for new investigators, defined as those who have recently received an M.D., Ph.D., or other doctoral-level degree. The B-W Scholars Program affords participants unparalleled access to NIA and NIH staff in an informal setting.

The B-W Scholars Program is sponsored by NIA with support from the National Hartford Centers of Gerontological Nursing Excellence.

The 2014 B-W Scholars Program will be held August 4-8, 2014 in Bethesda, Maryland. Support in most cases is available for travel and living expenses.

Applications are due March 28, 2014.

Researchers with an interest in health disparities research are encouraged to apply. Applicants from diverse backgrounds, including individuals from underrepresented racial and ethnic groups, individuals with disabilities and women are always encouraged to apply for NIH support. Applicants must be U.S. citizens, non-citizen nationals, or permanent residents.

Please view more information on the NIA web site: www.nia.nih.gov/about/events/2013/butler-williams-scholars-program-2014

Please feel free to circulate the above message to potential applicants.

For more information, please contact:

Ms. Andrea Griffin-Mann
Office of Special Populations
National Institute on Aging
National Institutes of Health
griffinmanna@mail.nih.gov

The ABA and Gensat expression data have been thoroughly compared, and they do not match up all that well

Posted on January 21st, 2014 in Anita Bandrowski, Data Spotlight | No Comments »

An article by Zaldivar and Krichmar discusses a comparison and alignment of data (never an easy thing) between the Allen Mouse Brain Atlas (ABA) and Gensat. The article grapples with the issue of data alignment and takes a real look at the two resources, which gathered their data from different sources: the ABA through a single high-throughput technique, and Gensat largely from the myriad of ISH studies it had collected.

The two sources are eerily different; see Table 2 below.
[Table 2 from Zaldivar and Krichmar, comparing ABA and Gensat expression data]
Note: green means agreement; blue and red seem to dominate, though!

My question when looking at something like this is: how much overlap should we expect?
If there is not much overlap, then what conclusions should we draw from any one gene expression study?

By the way, the authors used the Neuroscience Information Framework to search Gensat, but you can do your own comparison, because NIF has an “integrated view” giving a quick overview of the data from Gensat, MGI and ABA.
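If you pull expression calls for the same genes out of two resources, even a crude agreement score makes “how much overlap” a concrete number. A minimal sketch in Python, with made-up calls rather than the paper’s data:

    # Toy agreement check between two sets of expression calls
    # ("+" = expressed, "-" = not detected); values invented for illustration.
    aba    = {"Gfap": "+", "Calb1": "+", "Pvalb": "-", "Sst": "+"}
    gensat = {"Gfap": "+", "Calb1": "-", "Pvalb": "-", "Sst": "-"}

    shared = aba.keys() & gensat.keys()
    agree = sum(aba[g] == gensat[g] for g in shared)
    print(f"agreement: {agree}/{len(shared)} = {agree / len(shared):.0%}")  # 2/4 = 50%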

Free symposium covering neuroscience data integration at UCLA, March 11, 2014

Posted on January 17th, 2014 in Anita Bandrowski, News & Events | 2 Comments »

SYMPOSIUM: “Tools for integrating and planning experiments in neuroscience”
March 11, 2014, NRB Auditorium, UCLA

http://www.iclm.ucla.edu/events/planing.html

TOPIC: The increasing volume, complexity and interconnectedness of published studies in neuroscience make it difficult to determine what is known, what is uncertain, and how to contribute effectively to one’s field. The speakers will present ideas and tools to tackle this urgent problem.

REGISTRATION is free but required: http://www.iclm.ucla.edu/contact-form-2/Registration.php

The 2014 Series of PACE Data Mining Boot Camps Kicks Off on February 26-27

Posted on January 4th, 2014 in Anita Bandrowski, General information, News & Events | No Comments »

Each day, our society creates 2.5 quintillion bytes of data (that’s 2.5 × 10^18 bytes). Conventional statistical analysis and business intelligence software are not designed to capture, curate, manage and process data in the quantities now generated by most enterprises.

The PACE Boot Camps provide the Big Data community with conceptual and hands-on training. Learn the critical predictive data analytics techniques and tools that contribute to accurate, actionable and agile insights. Boot Camp training includes:

• Overview: Data Mining, Machine Learning, and Statistics
• Overview of CRISP-DM: Cross Industry Standard Process for Data Mining
• Introduction to Data Mining Tools
• Preprocessing the Data
• Learning Algorithms Implementations
• Model Evaluation and Validation
• Data Mining Trends, Applications and Guidelines

The Boot Camps are held at the San Diego Supercomputer Center on the campus of UC San Diego.

REGISTER NOW FOR THE 2014 SERIES OF BOOT CAMPS @ pace.sdsc.edu/boot-camps

FOR MORE INFORMATION:
paceinfo@pace.sdsc.edu
858.534.8321
pace.sdsc.edu