Archive for the ‘Curation’ Category

Supplementary material: Can NIF bring order to the netherworld of publishing?

Posted on September 3rd, 2010 in Anita Bandrowski, Curation, Essays, Force11, General information | 1 Comment »

Publishing papers is one of the mainstays of scientific discourse.  Many of us worry about the impact factor of journals, how many times we have been cited, and the ease of access to our papers.  The mantra has historically been ‘publish or perish,’ and it has recently shifted to ‘get funding or perish,’ but that is another blog post altogether.

In recent years we have had a great deal more data to publish or, at the very least, have been given the opportunity to include more than just a “representative figure,” a trend driven largely by the growth of databases and of “supplemental materials” as repositories for information.  This growth is a good thing; it has allowed the publication of enormous microarray studies that test thousands of genes in a single experiment.  Where before researchers could discuss only a small subset of the studied genes, they can now make the entirety of their data freely available, making it (theoretically) disseminated and discoverable.

The problem with supplemental data is that it is not indexed by PubMed, and even its owners have difficulty finding it (personal communication, Nestler 2010; also see the 2009 Nature editorial by Guralnick).  Furthermore, there are few standards for formatting these data during publication, resulting in significant heterogeneity of data formats.  For example, when extracting tables that report microarray data for a project within the Neuroscience Information Framework (NIF) called the Drug Dependent Gene database, we found that some of the tables were Excel spreadsheets, some were PDF files, and some were simply .jpg or .tiff files.  Some of the tables did not have titles, and many did not link back to their original papers, meaning that even if they were indexed by Google or another search engine, there would be no way to determine the actual context of the file.

Recently, the Journal of Neuroscience considered having researchers keep supplemental material on their own websites. All of our experience at the NIF indicates that this is likely to be disastrous.  NIF is in the business of maintaining links to websites, yet even we often lose them because webmasters rename directories in the process of instituting a “sleek, new design” and don’t leave behind permanent forwarding addresses.  The only difference may be the order of the words within a link, but even the most minor changes break links and necessitate fixing them by hand.  For a few links this is not a difficult task, but for thousands? It gets a little difficult.  Sweeping changes in directory organization are something we have long advocated against, yet when it came to redesigning the NIF pages, we committed the cardinal sin of changing our own directory structure and have had to go back and create redirects to all of the new pages so that people linking to us land in the appropriate location.

At the NIF, we realize the tremendous value of supplementary data, both in increasing the impact of a particular article and as a tool to reduce unnecessary duplication of experiments, and would like to share a few thoughts on the subject:

1. We advocate the submission of data to the appropriate databases (Gemma, GEO, or DDG for microarray data, CCDB for microscopic images, etc.).  If you have a set of data that does not fit any of these, please ask us where it could go via our forum or our email list.

2. We advocate the creation of one or more data warehouses that would permanently store the data that don’t fit into an existing database.  Should libraries take these data sets and store them for us the way that they handle books?

3. We strongly advocate the use of a standard vocabulary to normalize the data submitted to any such database (a minimal sketch of what this could look like appears below).  Doing so would allow data to be integrated into the extensively developed semantic matrix of NIF, allowing it to be discovered more easily not only by researchers using our portal but also by automated agents deployed for text mining.  It is yet to be determined whether text mining technologies will produce useful answers to real scientific questions, but making our data available to them in a format that is easy to digest seems a reasonable step.
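
To make the idea of vocabulary normalization concrete, here is a minimal sketch in Python of how free-text labels from supplementary tables might be mapped to a standard vocabulary before submission. The term list, identifiers, and field names below are purely illustrative, not actual NIFSTD content.

    # Minimal sketch: map free-text labels from supplementary tables to a
    # standard vocabulary before submission. The terms and identifiers are
    # illustrative placeholders, not the actual NIFSTD entries.

    STANDARD_VOCABULARY = {
        # free-text variant       -> (preferred label, identifier)
        "hippocampus":              ("Hippocampus", "EXAMPLE:0001"),
        "hippocampal formation":    ("Hippocampus", "EXAMPLE:0001"),
        "pfc":                      ("Prefrontal cortex", "EXAMPLE:0002"),
        "prefrontal cortex":        ("Prefrontal cortex", "EXAMPLE:0002"),
    }

    def normalize(label):
        """Return (preferred label, identifier) for a free-text label, or None."""
        return STANDARD_VOCABULARY.get(label.strip().lower())

    # A row from a heterogeneous supplementary table:
    row = {"gene": "Fos", "region": "Hippocampal formation", "fold_change": 2.3}
    match = normalize(row["region"])
    if match:
        row["region"], row["region_id"] = match   # now comparable across studies
    else:
        print("Unmapped term, flag for curation:", row["region"])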

NIF currently provides search tools across a wide variety of data sets and would provide an efficient means of searching the landscape of misfit data sets, as well as a way to link back to the primary articles, widening the impact of research groups and authors.  Moreover, including hard-won supplementary data will enrich the rapidly growing semantic knowledge base within NIF, one that can be discovered, explored, and accessed by any student, scientist, or interested researcher via the internet.

We are currently developing submission protocols and web-based tools that will automate and ease the process of submitting microarray and immunohistochemistry data to the DrugDependentGeneDB.  However, we would love to hear your thoughts or ideas regarding this or any NIF matter – please feel free to leave a comment, become our friend, or send an email.

If a Tree Falls In The Digital Forest, Does It Make a Sound?

Posted on July 16th, 2010 in Anita Bandrowski, Curation, Essays, Force11, General information | No Comments »

By Anita Bandrowski, Ph.D.

Humanity began writing on stone and clay tablets, then moved to papyrus and paper, and now we write with electrons.  Does it seem that our media for information storage are becoming flimsier, or is it better to search through piles of electrons than card catalogs?  How can we save the wonderful work that we are all paying for (in the form of government-funded research)?  Do database records hold the same value as published papers?  If so, how can we maintain them indefinitely?  Should there be a paper version of each database?  How can cloud computing and the linked data/open data initiatives help?  What is the role of libraries in this sort of data landscape?

In my own experience working on a semantic web project called the Neuroscience Information Framework (NIF) at the University of California, San Diego, I noticed something strange that has happened to our society and that bears on these questions.  For several months my desk was housed among many others in one of those open workspaces whose explicit goal is to improve communication between individuals (no cubicles). One day there was an interruption in the wireless service in the building.  This interruption resulted in the inevitable frustration of “I can’t do what I was just doing,” but then a tremendous event occurred: these strange entities who had been toiling near me, and whose existence I acknowledged with a nod each morning, became real humans.  The amazing awakening resembled an episode of Star Trek in which the Borg, a half-machine, half-biological group fully integrated into the hive mind, suddenly lose connectivity to the hive and bumble around, very confused.  People all around me began waking up from the technology trance and started to act more like … people.  They greeted me, we exchanged opinions of the wireless service, and we met.

With my Borg experience in mind, questions of our deep dependence on technology crystallized.  What if the power went off on Wikipedia?  What if Google didn’t exist?  How would I find things?  How would I be able to work without Google Docs?  In this networked world, is it possible that we can’t survive without the collective?

The level of integration of online information and search systems with our lives has become very eerie, to say the least.

As scientists, do we have the same issues?  Can’t we do research without PubMed?  A few years ago while at Stanford, a colleague and I were talking to an art historian, and the conclusion of the discussion was, “if it (a scientific paper or a piece of data) does not exist on the web, then it does not exist”.  This is quite contrary to the experience of the art historian, who apparently still did research in a physical building that contained actual papers, books, and non-digital versions of art.

So, then, who backs up the data that we are becoming completely dependent on?  When researchers move to a new university or pass on to the great beyond, what happens to the data stores that they maintained?  Do they take their data with them, setting up cloud computing operations?

The good news is that scientific data in databases, whether or not they are published on paper, are backed up, and data are regularly checked for integrity at most sites.  Data and software tools are also replicated in so-called “mirrors”, which are essentially copies of the same data or software tools that serve a particular community.  Additionally, the National Library of Medicine copies and stores many of the significant databases in its systems, allowing researchers to access them and storing a digital copy for posterity. For example, the GENSAT project data exist on Rockefeller servers, but a mirror of the data is also set up at NCBI (the National Center for Biotechnology Information, part of the National Library of Medicine and the home of PubMed).

This seems safe enough. However, the directors of the National Institutes of Health are not always as willing to support databases indefinitely as they are to pay researchers to set them up.  So after five or ten years, when the funding runs out, what happens to all that data that researchers painstakingly toiled for many years to gather?  Some data were published on paper; some were likely never published anywhere, or were pulled together from papers by raw human effort, such as the Ki database, which gathered from many publications the raw numbers for affinity between drugs and receptors.  Many databases contain that elusive negative data which is not considered worthy of publishing by the ‘peer reviewing’ crowd, but which may save other researchers tremendous time if they try to replicate an experiment that several others have already found does not work.  Some databases migrate to funded projects and are then maintained by other universities while the funding is in flux, but some simply vanish into the ether.  Should someone maintain them?

The experience of the private human genome project “Panther,” started by Craig Venter at Celera Inc, later Applera, later Applied Biosystems, later an unsupported project at the Stanford Research Institute, and now potentially rising from the ashes into a new project, shows that industrial data may have a similar or potentially an even more dire fate.

In recent years, several movements have swept data science. One is the open data movement and another is the linked data movement.  Both bear on this issue of data maintenance.  The linked data movement (one of the buzzwords in the semantic web community) attempts to link all pieces of related information by formal relationships, sort of like playing an enormous game of “Six Degrees of Kevin Bacon” with scientific data.  Obviously, these data sets must be openly accessible for this to work, so the open data movement spurred the creation of huge datasets readable by anyone in the world.  These data sets include some of the most valuable biomedical data, such as OMIM and PubMed, but also include Wikipedia and other less-than-peer-reviewed data.  Many people in the open data world talk about their preferred ways of storing that data, such as “tuples” or graphs, but all of this boils down to a few main ideas (a small sketch follows the list):

  1. A piece of data should persist in a reliable way, with a reliable address.
  2. A piece of data should be in a format that is readable by others.
  3. A piece of data should have a unique identifier, akin to a social security number.
  4. A piece of data is not owned by anyone, but should be traceable to its origin.
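
A minimal sketch of what these four ideas might look like in practice follows, in plain Python with a hypothetical record, identifier scheme, and URLs; real linked-data systems typically express the same ideas with RDF triples and resolvable URIs.

    from collections import namedtuple
    from datetime import date

    # One "piece of data" carrying the four properties listed above:
    # a stable identifier, a reliable address, a readable format, and provenance.
    DataRecord = namedtuple(
        "DataRecord", ["identifier", "address", "payload", "provenance"]
    )

    record = DataRecord(
        identifier="example.org/data/0001",          # unique, never reassigned
        address="http://example.org/data/0001.csv",  # persistent, resolvable location
        payload={"gene": "Fos", "fold_change": 2.3}, # machine-readable format
        provenance={"source": "Example et al. 2010", "retrieved": date(2010, 7, 16)},
    )

    # Links between records are just triples: (subject, relationship, object).
    links = [
        (record.identifier, "derived_from", "pubmed:00000000"),  # placeholder ID
        (record.identifier, "mentions", "example.org/entity/aquaporin4"),
    ]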

Therefore, the open data community has a vested interest in making all data available for their systems to consume and compute, including databases whose authors, or whose authors’ funding, have expired.

In the model of linked data, as a ‘six degrees of Kevin Bacon’ analogy, the data graph would suffer if the record of a movie were to be wiped off the graph.  Would we still know that Tom Hanks was connected to Kevin Bacon if Apollo 13 was no longer a data link?    Probably, but the link would no longer be direct.

The problem with linked data disappearing is that the relationship between Aquaporin-4 and Eric Nestler is less well established than the relationship between Tom Hanks and Kevin Bacon. A database of supplementary materials does contain this connection (see the Drug Dependent Gene database). Indeed, if data are deposited inside a database but are not central nodes of discourse, they may disappear without a sound.  However, their inherent value may not lie in their connectivity; it may instead lie in a direction that few have pursued as a line of investigation, such as a promising lead for a therapeutic agent in a particular disease, or the piece of negative data that will spare another researcher a year of fruitless endeavor.

[Figure: Linking Open Data (LOD) project map – the six degrees of online data sources]

The stance of the Neuroscience Information Framework (NIF), as a member of the semantic web community, is that data should be preserved because they may be useful at a later time.  The larger question is: who will pay to preserve the data?  What is the role of libraries in an age where books are no longer made of paper but are stores of knowledge with a ‘front end’ and a ‘back end’?  Will we have thousands of databases taking up room in library basements somewhere, where they can be accessed like so many other ‘collections,’ or will projects such as NIF be the keepers of these data because they can integrate searching across data structures?  Who will champion data preservation in the digital age?

The Meaning of “Is”

Posted on April 16th, 2010 in Curation, Essays, Force11, General information, Maryann Martone | 1 Comment »

That’s an easy one, with all due respect to our former president.  As far as the NIF is concerned, “IS” is the inferior salivatory nucleus.  How do we know?

Perform a search in NIF and you will see various terms highlighted in the search results (the current highlighting color is brick red, but we are open to suggestions).  Hover over each of these highlighted terms and NIF will tell you what the term means to the NIF system.  If you hover over “IS,” NIF tells you it’s an anatomical structure. If you right-click on it and ask to see “IS” in NeuroLex, it will tell you that IS is an abbreviation for the inferior salivatory nucleus.  This new feature is an example of what is often called “entity recognition.”

In the formal world of knowledge representation, an entity is that which is perceived, known, or inferred to have its own distinct existence.  For NIF, entities are those things, like organisms, cells, molecules, and techniques, that define our domain.  These entities are represented in the NIF ontologies.  Each entity has its own numerical identifier, sort of like a social security number, that uniquely identifies it.  This identifier is used to tie the different ways of saying the same thing to the same entity.  For example, NIF doesn’t care whether you call entity birnlex_2645 the IS, the inferior salivatory nucleus, or Freddy, for that matter.  They are all (and always) the same thing.
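
As a concrete illustration, entity recognition at its simplest is just a lookup from any of an entity’s names to its identifier. Here is a minimal sketch in Python; the identifier birnlex_2645 and the names IS, inferior salivatory nucleus, and Freddy come from the example above, while the structure of the lookup is simplified for illustration.

    # Minimal sketch of a synonym-to-identifier lookup for entity recognition.
    # birnlex_2645 is the identifier mentioned above; the index here is a
    # simplified illustration rather than the full NIF ontology entry.

    ENTITY_INDEX = {
        "is":                          "birnlex_2645",
        "inferior salivatory nucleus": "birnlex_2645",
        "freddy":                      "birnlex_2645",   # NIF doesn't care what you call it
    }

    PREFERRED_LABEL = {"birnlex_2645": "Inferior salivatory nucleus"}

    def recognize(term):
        """Map any known way of naming an entity to its unique identifier."""
        entity_id = ENTITY_INDEX.get(term.strip().lower())
        if entity_id is None:
            return None
        return entity_id, PREFERRED_LABEL[entity_id]

    print(recognize("IS"))      # ('birnlex_2645', 'Inferior salivatory nucleus')
    print(recognize("Freddy"))  # same entity, different name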

Unfortunately, the richness and complexity of our language makes recognizing entities a tricky thing, as everyone who uses a search engine knows.  Not only can we call the same entity many things, but we can call many entities the same thing.  Chances are that the IS highlighted by NIF in the search results actually is not the inferior salivatory nucleus but the third person form of the verb “to be,” or perhaps it is the initial segment of an axon or the Institute for Science.    Right now, NIF doesn’t really know.

In future releases of NIF, we will be working towards improving the accuracy of our entity recognition.  Why?  Because once we know that IS is a brain nucleus, we can find anything that is known about it:  its projections, its genes, the diseases in which it is affected.  A preview of what is coming can be seen in the NIF Cards.

[Figure: IS search – searching for “IS” with the NIF card shown]

NIF cards for each entity can be viewed by right-clicking on the highlighted term and selecting “Show NIF card” from the menu. NIF cards are currently implemented only for anatomical structures and cells.

For now, however, we hope you will explore the new NIF and develop an appreciation for the difficulties of semantic search by seeing what NIF thinks the results mean.  You may be surprised!

Defining Adulthood

Posted on February 3rd, 2010 in Anita Bandrowski, Curation, Force11, General information | 9 Comments »

THE PROBLEM

Adulthood, like many terms we use for describing data, is a poorly defined and somewhat arbitrary concept. When does an organism become an adult? The answer in general would be “it depends on how you define adult.” In the highly charged world of scientific discourse, people may argue, correctly, that there is no single definition of adult that would satisfy everyone and no magical time point at which adulthood begins. The question for the Neuroscience Information Framework, or any other group attempting to integrate data from many sources, is not whether one group of definitions is correct, but rather whether such a concept is useful for comparing and understanding data.

To illustrate this point, MGI, the Mouse Genome Informatics project, which is the place to go for all things mouse (from mouse strains to ontologies and genes), does not define the term adult because of the disagreement among scientists as to what constitutes the break between juvenile and adult mice (personal communication). Of course, MGI does have the “adult brain ontology”, among other resources labeled with the term adult. In other words, they use the term because it is useful and describes a set of organismal characteristics, but they are unwilling to define it because of the ambiguities in the definitions.

Other large datasets, such as the Allen Brain Atlas, do not deal with these sorts of definitions at all; rather, they take data only from postnatal day 55 animals, which they consider safely within the adult range.

In an ideal world, we would provide a standard set of organism attributes for every subject used, supplied in a computable form, e.g., age, weight, sexual maturity. Anyone could therefore request data only from those subsets of animals that were comparable, e.g., between 30 and 90 days of age and between 100 g and 200 g. Within a given resource, e.g., a database, one can easily set up such a system. However, for a system like NIF, which searches across broad swaths of information contained in individual databases, XML files, HTML pages, and text, it is currently impossible to provide such a universal computational service on the fly, even for something that should be conceptually simple, e.g., the representation of age (days, months, years, prenatal, embryonic, etc.). Never mind the fact that such information is not consistently available from a given source.
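
As a sketch of what such a computable description could enable, consider the query described above: once age and weight are stored as numbers rather than free text, filtering comparable subjects becomes a one-line operation. The subject records and field names below are hypothetical.

    # Hypothetical subject records with computable attributes.
    subjects = [
        {"id": "rat-01", "age_days": 45,  "weight_g": 150, "sex": "M"},
        {"id": "rat-02", "age_days": 120, "weight_g": 310, "sex": "F"},
        {"id": "rat-03", "age_days": 60,  "weight_g": 180, "sex": "M"},
    ]

    # Request only comparable animals: 30-90 days of age and 100-200 g.
    comparable = [
        s for s in subjects
        if 30 <= s["age_days"] <= 90 and 100 <= s["weight_g"] <= 200
    ]
    print([s["id"] for s in comparable])   # ['rat-01', 'rat-03']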

A consideration of the literature shows that many times the only label for age is “adult” with no specifics provided.

For databases that take and analyze data from published work, like neuromorpho.org, the word “adult” is often the only age descriptor available for a particular data set. Automated systems recognize this term, but if the definition is not constant across sources, “adult” is not a useful bucket for aggregating information. One source may have adulthood starting at P21 while another starts it at P30. Furthermore, automated systems would not be able to translate “P55” or “week 5” into adulthood unless there were a definition that could be applied.

DEFINITION OF ADULT

The question is whether we can come up with a definition of adulthood that can be consistently applied. Most biological definitions of adulthood deal with either the readiness of an organism to reproduce (sexual maturity) or the notion that an animal is full-grown. Both definitions have inherent problems. For example, many animals, including male rats, do not stop growing until death, making “full-grown” applicable only at the end of life. Similarly, sexual maturity may be defined as the onset of estrus, but it can also be defined as the termination of ‘pubescence,’ a period of time that is difficult to assess in a rat or mouse.

Adding a little complexity to the problem is the relatively simple question of what counts as the day of birth. Scientists from some entrenched camps define the day of birth as postnatal day zero, while others define it as postnatal day one. Neither group is incorrect, but anyone attempting to bring together data from various datasets (or publications) is required to spend a large amount of time working out whether a particular piece of data comes from an animal that is P5 or P4.

Because of the inherent problems in defining such a thing, the ontology community (a community concerned with establishing standards of discourse in scientific communication) and many researchers who build databases meant to compare data from various sources treat adulthood with caution. Nonetheless, as evidenced by its wide use, the concept of “adult” is useful and often stands alone as an important characteristic for describing data, even though it is not well defined for any species.

THE ARBITRARY BUT DEFENSIBLE SOLUTION

The above-mentioned problems with defining adulthood are echoed and magnified in humans because of the need to assess emotional maturity and readiness to take on the tasks of independent existence in a complex society.  The solution to determining what an adult human is has been strangely simple and boils down to a number.  Any parent of a teenager knows that there is no magical event that happens on the 18th birthday of a child, but legal systems need a hard cut-off so that the treatment of criminal activities and the rights bestowed on individuals are clearly defined.  Therefore, in almost all advanced societies the legal age of adulthood is 18 years, whether or not the individual is emotionally ready and whether or not the pubertal period has passed.

We suggest that a similar arbitrary but defensible cut-off should be established and implemented for all research animals so that when the age of an animal is reported as “adult” we can, with some degree of certainty, compare the data of one study to the thousands of other similar studies.

According to the work of Finlay and Darlington (Science, 268:1578-84) on the chronometry of species, the final important steps in mouse brain development occur 29.7 days after conception, or postnatal day 12 (birth is P0 in this case); estrus (sexual maturity) typically begins between postnatal days 25 and 40; and body growth is completed at about postnatal day 50.  So we can use the arbitrary date of postnatal day 50 as the definition of an adult mouse, as this is a reasonable standard.  We will define the day of birth as postnatal day 0.  Mice between P0 and P24 will be termed juvenile, and mice between P25 and P49 will be termed early adult.
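
Under this convention (birth is P0, P0–P24 is juvenile, P25–P49 is early adult, P50 and later is adult), an automated system could translate age strings such as “P55” or “week 5” into consistent buckets, which addresses the translation problem raised earlier. Here is a minimal sketch; the two string formats it parses are illustrative, not an exhaustive normalizer.

    # Minimal sketch of the proposed mouse age convention: birth is P0,
    # P0-P24 is juvenile, P25-P49 is early adult, P50 and later is adult.

    def parse_age_days(age_string):
        """Convert simple age strings such as 'P55' or 'week 5' to postnatal days."""
        s = age_string.strip().lower()
        if s.startswith("p"):
            return int(s[1:])
        if s.startswith("week"):
            return int(s.split()[1]) * 7
        raise ValueError("unrecognized age format: " + age_string)

    def life_stage(age_string):
        days = parse_age_days(age_string)
        if days < 25:
            return "juvenile"
        if days < 50:
            return "early adult"
        return "adult"

    print(life_stage("P55"))     # adult
    print(life_stage("week 5"))  # early adult (35 days)
    print(life_stage("P12"))     # juvenile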

IN CONCLUSION

In the NIFSTD (Neuroscience Information Framework standard ontology) we will define arbitrary but defensible standards for mice and other common research species, because this sort of standard is an important part of establishing a common framework for discussion, not necessarily a claim about absolute scientific truth.

The reason we need a standard for age, and for many other such common terms, is to establish a point of reference that allows for accurate communication about results.  This is presumably the reason the International System of Units (SI) was put in place, and we believe in the standardization of certain common variables in experiments for the sake of effective data analysis.

Professional vs. self-curation

Posted on October 12th, 2009 in Anita Bandrowski, Curation, Essays, Force11 | 1 Comment »

Benefits and pitfalls of integration of two very different data types, by Dr. Anita Bandrowski, NIF Curator

Overview of NIF registration processes and the role of DISCO:
The Neuroscience Information Framework (NIF) project has a dynamic inventory of more than 2,300 neuroscience-relevant resources. What makes that inventory dynamic is that NIF encourages resource providers to register their resources in our catalog of “all things neuroscience.”  This process is not terribly involved for resource providers, as they need only fill out basic information about their resource, such as the URL, name, description, and keywords.  In the near future, resource providers will also be able to take away a “DISCO” file, short for resource discovery.  This file is maintained on the resource provider’s website, and the provider keeps the information within it current at the source.  When a change is made, NIF is alerted through an automated agent that crawls the site periodically.  Resource providers therefore do not need to send updated information to NIF or to any other system that indexes the file; the updates are performed by the system, and the NIF catalog is kept up to date without anyone having to visit each of the 2,000+ sites currently listed.
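
The general idea behind such automated updating can be sketched as follows: the provider keeps a small machine-readable description at a known URL, and a periodic crawler detects changes by comparing the current content against what was last ingested. The file location, field names, and JSON format here are hypothetical illustrations, not the actual DISCO format.

    import hashlib
    import json
    import urllib.request

    # Hypothetical example; the real DISCO format and location differ.
    DESCRIPTION_URL = "http://example.org/resource-description.json"

    def fetch_description(url):
        """Download the provider-maintained description file."""
        with urllib.request.urlopen(url) as response:
            return response.read()

    def has_changed(raw_bytes, last_seen_digest):
        """Compare a content hash against the one stored at the previous crawl."""
        return hashlib.sha256(raw_bytes).hexdigest() != last_seen_digest

    raw = fetch_description(DESCRIPTION_URL)
    if has_changed(raw, last_seen_digest="..."):    # digest saved from the previous crawl
        description = json.loads(raw)               # e.g. {"name": ..., "url": ..., "keywords": [...]}
        # re-index the resource in the catalog with the updated metadata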

The process of provider registration is a good idea, and we are not the only ones to think of it.  Other projects in biomedical science essentially seek to accomplish the same goal.   Of these, the Biositemaps project, supported by the National Centers for Biomedical Computation, has advanced considerably towards implementing a similar technology.   NIF believes that if providers register their resources using one of these tools, then they should not have to do it again using a slightly different tool. Rather, the data generated by all tools should be accessible to all systems.  We have just completed an exercise in harvesting Biositemaps files into NIF and provide here our experience with and perspectives on the exercise.

Rationale for integration:

Tools such as Biositemaps and DISCO allow the people who know the most about their resources, i.e., those who created them, to describe those resources so that search engines can easily find them.  This “self description” is a great idea in theory, but in practice it may not work as intended.  The NIF project, a framework for resource description and discovery, has recently developed tools to harvest the descriptions from Biositemaps.  We believe that biomedical resources should be described in a consistent manner and made discoverable so that projects similar to NIF can present them to our user community.  During this exercise, we have come across several problems that were echoed by other projects attempting to do similar things.

At the outset, the Biositemaps initiative was created as a Google Sitemaps-like mechanism intended to point search engines to appropriate information about biological software and data sets.  Biositemaps has a great deal of appropriate data about biologically relevant software tools. Because of this, NIF was highly interested in importing these data, which was especially enticing because the data can be dynamically updated by the resource providers: if a particular software tool has a new version, search systems are notified of the update automatically.

Metadata structure compatibility and vocabularies:
NIF has made a conscious decision to have a very simple metadata structure to alleviate several problems, including the inappropriate use of metadata fields and the time intensiveness of both the curation effort and the training of curators.  The original NIF developed a fairly comprehensive structure (still available at http://neurogateway.org; see also Gardner et al., 2008) that was populated by the resource providers themselves.  These resource providers were mostly scientists who were building tools or databases.  Many scientists are not metadata experts, and this led to very inconsistent labeling of resources at the outset of the NIF project.  The inconsistencies in annotation made searching for resources very difficult; furthermore, the complicated structure was not intuitive to the end user.  The simple structure adopted by NIF alleviated the curation and search problems and also turned out to be quite useful for integrating many different metadata structures, including Biositemaps.  The mapping of fields from Biositemaps to NIF was very simple, taking only a few days to reconcile.

The most significant effort required for integration was mapping the resource types, e.g., database, software tool.  Biositemaps populates the resource type from the Biomedical Resource Ontology (BRO: http://bioportal.bioontology.org/ontologies/39002), while NIF uses the NIFSTD resource ontology.  These two efforts were developed independently but are now converging through the concerted effort of both groups.  In the meantime, they continue to have some differences; for example, some classes exist in one ontology and not the other, e.g., “core facility” is explicitly labeled in the BRO but not in NIF.  Thus, if resource providers mark their resource as a core facility, NIF cannot automatically ingest this information, and a human curator must intervene.  We have therefore continued to align the BRO and NIFSTD as much as is humanly possible to reduce the need for human intervention.
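
A minimal sketch of this kind of cross-ontology mapping follows; the class names are simplified illustrations (only “Core Facility” is taken from the example in the text). Types that have a counterpart are translated automatically, and anything unmapped is routed to a human curator.

    # Illustrative mapping from Biositemaps/BRO resource types to NIF resource
    # types. Class names are simplified examples, not the actual ontologies.
    BRO_TO_NIF = {
        "Database":  "Data repository",
        "Software":  "Software tool",
        "Ontology":  "Controlled vocabulary",
        # "Core Facility" exists in BRO but has no counterpart here,
        # so it is deliberately absent from this map.
    }

    def map_resource_type(bro_type):
        """Return the NIF type, or None to flag the record for a human curator."""
        return BRO_TO_NIF.get(bro_type)

    for incoming in ["Software", "Core Facility"]:
        nif_type = map_resource_type(incoming)
        if nif_type is None:
            print("No automatic mapping for", repr(incoming), "- send to curator queue")
        else:
            print(incoming, "->", nif_type)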

Data structure compatibility and scope:

While the metadata structure harmonization has taken some effort, it is a tamable exercise.  What we have noticed, however, is that the data within Biositemaps supplied by resource providers are extraordinarily heterogeneous in quality.  In about 200 out of 400 Biositemaps, the data are well formed, but the remaining records contain only partial information, including missing resource names or URLs, making it difficult to import all of the Biositemaps data into NIF in an automated fashion.  The NIF registry database (like all databases) expects to see certain minimal data, including a name and a URL; when these items are not present, the database does not accept the record.

Additional heterogeneity comes from the amount of descriptive text. NIF registry records prepared by curators usually have three to six paragraphs of text, but most Biositemaps resources describe themselves in a single sentence.  NIF uses longer descriptions because we found early in the project that longer descriptive text contains many of the terms NIF users search for that may not be included as keywords, making search through the NIF registry more effective.  With minimal descriptions, it is unlikely that the NIF search interface would surface Biositemaps resources in a sea of NIF-curated resources.

Finally, the issue of combining records that are already present in NIF with Biositemaps data presented some challenges for our system.  Because we don’t yet have a universal way of assigning URIs to resources, resources tend to be cross-listed in many catalogs.  For this reason, NIF is supporting the Common Naming Project (http://neurocommons.org/page/Common_Naming_Project).  As NIF had already curated many of the listed resources, in many cases more thoroughly than the resource providers themselves, the process of reconciling and merging the information was not straightforward. To address these problems, NIF has updated the registry data structure to accommodate two coexisting versions of each record: one is a storage bin for automated data and the other is the human-curated version.  Any record that is publicly available in the NIF will be curated by a human, yet with automatic registration the human curator is prompted to review the site whenever an update occurs.
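
Here is a sketch of the minimal validation and the two-version record model described above. The required minimum (a name and a URL) and the automated/curated split come from the text; the field names and storage layout are illustrative.

    REQUIRED_FIELDS = ("name", "url")   # the minimum the registry will accept

    def validate(record):
        """Reject automated submissions that lack a name or a URL."""
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        return (len(missing) == 0, missing)

    def ingest(automated_record, registry):
        """Store the automated version alongside the human-curated one."""
        ok, missing = validate(automated_record)
        if not ok:
            return "rejected: missing " + ", ".join(missing)
        entry = registry.setdefault(automated_record["url"],
                                    {"curated": None, "automated": None})
        entry["automated"] = automated_record    # storage bin for harvested data
        entry["needs_review"] = True             # prompt a curator to review the update
        return "stored for curation"

    registry = {}
    print(ingest({"name": "Example Tool", "url": "http://example.org/tool"}, registry))
    print(ingest({"name": "", "url": "http://example.org/other"}, registry))  # rejected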

Resource characterization is a tricky problem, and it is difficult to know the correct way to represent a resource for a particular audience. For example, the Biositemaps entry for the I2B2 project (https://www.i2b2.org/; an NIH-funded National Center for Biomedical Computing containing a large number of software resources) created an individual Biositemap for each plug-in to their software tools.  This is an issue of scope. Because NIF’s curators, as a policy, do not divide resources to this extent, we consider most plug-ins to be part of the software resource, not individual resources (there are some exceptions, such as MATLAB libraries).  For a project such as NIF, resources need to be well defined, because trying to catalog every resource useful to neuroscientists becomes a daunting task if a resource is defined too narrowly, such as a plug-in.  If we consider a resource at the level of NITRC, a software library with hundreds of software applications, then it will take a curator some time to annotate it.  However, if we consider each plug-in to each program a resource, the task becomes too large and is not likely to help users.  On the other hand, if a user is looking for a very specific plug-in, then having access to each one individually is likely to be useful.

To solve this scope problem, we have created a uniqueness criterion for the URL, meaning that if the URL is not unique among several Biositemaps “resources,” then the resource descriptions are folded into one.  The solution is not perfect, because unrelated resources could potentially share the same URL, but this strategy solved more problems than it created.  A minimal sketch of the idea appears below.
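
The sketch uses hypothetical harvested records in which several plug-in descriptions point at one URL; they are folded into a single resource entry, while a description with its own URL remains its own resource.

    from collections import defaultdict

    # Hypothetical harvested descriptions; several plug-ins share one URL.
    harvested = [
        {"name": "Imaging Suite",             "url": "http://example.org/suite"},
        {"name": "Imaging Suite - plug-in A", "url": "http://example.org/suite"},
        {"name": "Imaging Suite - plug-in B", "url": "http://example.org/suite"},
        {"name": "Standalone Atlas Viewer",   "url": "http://example.org/viewer"},
    ]

    def fold_by_url(records):
        """Fold descriptions that share a URL into a single resource entry."""
        grouped = defaultdict(list)
        for r in records:
            grouped[r["url"]].append(r["name"])
        return {url: {"url": url, "names": names} for url, names in grouped.items()}

    for url, resource in fold_by_url(harvested).items():
        print(url, "->", resource["names"])
    # http://example.org/suite  -> three descriptions folded into one resource
    # http://example.org/viewer -> kept as its own resource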

Summary:
“Self-registration” tools such as Biositemaps can be used to help human curators annotate a resource, including alerting curators that a new resource has been created. However, while these tools can certainly help, we believe that they do not replace trained human curators.