How long does it take to get a resource into NIF? The case of the Open Source Brain.

Believe it or not, there really is a project called Open Source Brain, and it is a wonderful community of hackers that attempts to do very novel things with open source models, mainly in a format called NeuroML.

What is the Open Source Brain?

Well, it takes models, converts them into cool visualizations, and then lets users manipulate them in their browser, with functionality similar to Google Body. The hope is to tap into some significant computational power from the Neuroscience Gateway's massive clusters so that the pretty pictures can be made fully functional, but for now, this is a great way of exploring three-dimensional neurons and connectivity.


But the reason I am blogging about this project is not only because of the "ooohh-aaaahhh" factor that nice graphics usually have on me, but also because this resource came to NIF in an interesting way: by a human flying in from London on his way to another meeting. Until last week we did not know about the Open Source Brain, but Padraig knew about NIF and wanted to register the project, hoping to integrate his data or at least "get the process started".

At 10:30 am we were sufficiently caffeinated to begin and created a registry entry, from which we obtained an identifier.

The identifier was then used to create a sitemap entry in the DISCO database (essentially, anyone who has logged in to NeuroLex can do this by clicking a button at the bottom of a curated registry entry).

Then we added an "interop" file, which instructs our crawler to load the XML data output by Open Source Brain into our local data warehouse, making sure to specify the appropriate tables and columns.
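To give a rough sense of what "specifying tables and columns" means here, the mapping conceptually boils down to telling the crawler which warehouse table and columns each piece of the crawled XML should land in. The sketch below is purely hypothetical; the table and column names are made up for illustration and are not the real DISCO schema.

-- Hypothetical warehouse table for crawled Open Source Brain records.
-- All names are illustrative only; the actual DISCO layout differs.
CREATE TABLE osb_models (
    model_id      VARCHAR(64)  PRIMARY KEY,  -- identifier pulled from the XML
    model_name    VARCHAR(255),              -- human-readable model name
    neuroml_url   VARCHAR(512),              -- link to the NeuroML source file
    description   TEXT,                      -- free-text description of the model
    last_crawled  TIMESTAMP                  -- when the crawler last refreshed this row
);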

Then we went to lunch, came back after fighting much larger crowds at the Indian place than we expected before finals, and created the "view" of the data (basically, wrote a SQL statement and used our concept mapping tool to define what data would be displayed).
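The SQL behind a view like this is nothing exotic. Assuming the hypothetical warehouse table sketched above, it might look roughly like the following; again, the names are illustrative stand-ins, not the actual NIF view definition, and the column aliases only suggest the labels the concept mapping tool would attach.

-- Hypothetical view exposing the columns to be displayed in the data federation.
CREATE VIEW osb_model_view AS
SELECT
    model_id     AS "Identifier",
    model_name   AS "Model Name",
    neuroml_url  AS "NeuroML Source",
    description  AS "Description"
FROM osb_models;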

By 3:30 pm we had a view deployed. Well ok, we did have to import the data twice because we messed up the file once, and this deployment was to the beta server, so we had to wait until Friday night to update production, but that is still pretty darn fast in my opinion.

The question for many people who have data has been: how much effort will it take to make my data interoperable with community resources? For the first time ever, we can report .... it will only take a couple of hours (we should insert many caveats here).
