

Monday, January 23, 2017

CERF ELN Version 5.0 is here!

Well folks, CERF ELN 5.0 is finally ready for release.

Read the press release here.


CERF 5.0 is the culmination of almost two years of hard work by many people. This is the first version of CERF created by its new producer, Lab-Ally.

Job one was locating and moving the source code to an all-new, modern, agile and fully integrated development, build and support platform. This new platform will allow us to move forward more quickly after this initial release (or more accurately, "re-boot"). Our next engineering task involved updating the core components that make the whole shebang work: Java, Tomcat, OpenOffice, MySQL and modern SSL certificates, plus dozens of other components and libraries.

Next came the hard part, refactoring all the code to get the updated components and build environment to work together and spit out a functioning product. This part took us months, but when the newest version was finally birthed, we liked what we saw.

Then we went on a graphics spree, refreshing and redesigning almost all of the icons and buttons, and adding support for GUI features (like Mac's full screen mode) that didn't exist when CERF was first created.

As we worked on the product and started using it for data management within our own organization, obvious priorities for new features, improvements and refinements started to emerge, as did the need to "comment out" certain older, buggy or deprecated features that we plan to circle back to later. The rate at which the team brainstormed new ideas began to accelerate. New feature requests poured into our request system (JIRA) and before long, an ambitious roadmap that stretches years into the future took shape.

CERF 5 focusses on shoring up the product's most powerful features: semantic metadata and semantic search, round-trip editing, flexible import and export of data, and the use of notes, tags and configurable ontologies to add meaning to your files. Several new search parameters were added and the default search parameter list was redesigned to make it easier to use. A new export tool was created, and a new version of the Automaton (formerly the "Automation Client") was built. Lab-Ally has also redoubled efforts to prioritize product quality, speed and stability, and is putting much more emphasis on clear and complete documentation, as well as compliance with industry-standard security measures like Microsoft and Apple code signing, which previously had been largely ignored. Looking into the future, CERF will become increasingly focussed on the needs of GLP or "spirit of GLP" labs, with full support for things like ALCOA and related documentation principles.

The last piece in the puzzle was tracking down the code for the iPad app and redesigning it to work with modern iPads and to comply with Apple's more stringent code and security standards. Honestly, the iPad app was never much more than a prototype when it was first released in 2010 or 2011, so it took quite a lot of effort to get the new version to the point where we were comfortable releasing it. We called it iCERF, and it's available on the iTunes Store now.

We are happy with the results, and we think CERF is well positioned to take advantage of a growing demand for a full-featured Electronic Lab Notebook and 21 CFR 11 compliant document management system that can be installed on-site. The cloud may be popular for many sorts of data storage, but when it comes to mission-critical, irreplaceable intellectual property, smart organizations are getting tired of huge corporations holding their data hostage in the cloud, where we all know the US and Chinese governments will probably rummage through it any time they like.

If you want a free demo of this newest version, please contact Lab-Ally.

Monday, July 11, 2011

It's all just semantics.

The problem with computers is that they are, as we used to say back in the UK, "all face and no trousers". Computers can hold huge amounts of data and can quickly search for target character strings or numeric values, but ultimately they have no idea what any of the data actually mean. This is problematic when dealing with a data management program that centralizes information pouring in from many scientists. If I want to find something that I know I at one time contributed to an ever-increasing mountain of data, I can search for a word or value that I know I included when I created it. However, if I want to find something someone else created, then I have a problem, because I don't necessarily know any of the words or numeric values that they included, and they may no longer be available to ask. Additionally, I won't necessarily recognize search results as useful based on a file name, image thumbnail or some other preview that is the result of a full-text search. When a scientist asks another scientist an ambiguous question, we, as humans, can respond in a uniquely human way by saying something like "what do you mean?" Poor, dumb computers, on the other hand, can never know "what you mean", because they don't understand meaning.

A good ELN system finds ways to associate rich meaning with data to make it easier to find information and build upon past laboratory research. One way to do this is to associate metadata with raw data files, which helps to identify the data's origin, meaning and relationships. Some types of metadata are common and well known – keywords, for example, or the "tags" that so many of us use to identify our friends in Facebook pictures. Savvy organizations understand the importance of metadata and may insist that information such as sample numbers, project IDs and grant numbers be associated with all data to make it easier to gather and find later, but this kind of metadata still assumes that users know and follow established conventions and naming schemes. A good ELN can go one step further. Using technologies such as OWL and RDF, a good ELN can associate semantic metadata with objects using pre-defined ontologies that can anticipate how humans make meaning of data. Think of an ontology as a set of related, hierarchic terms with increasingly specific meanings. For example, the term "Autoimmune Disease" might be subdivided into a full list of 100 different examples (Addison's disease, Alopecia, Arthritis, Allergies, etc.) and some of these second-level terms might be subdivided into third-level terms (Allergies include Hay Fever, Penicillin Allergy, etc.). If you perform an experiment related to Hay Fever and you associate the semantic metadata label "Hay Fever" with that experiment, and a colleague later searches for "Autoimmune Disease", a good ELN is smart enough to include the Hay Fever experiment in the search results, because it "knows" that Hay Fever is a type of Autoimmune Disease, even though the experiment does not anywhere contain that exact phrase.
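To make that subsumption idea concrete, here is a minimal sketch in Python of how a search term can be expanded down an ontology before matching records. The toy ontology fragment, the records and the function names are invented for illustration; this is not CERF's actual implementation.

```python
# Toy ontology: each term maps to its more specific child terms.
# This is an illustrative fragment, not a real medical ontology.
ONTOLOGY = {
    "Autoimmune Disease": ["Addison's Disease", "Alopecia", "Arthritis", "Allergies"],
    "Allergies": ["Hay Fever", "Penicillin Allergy"],
}

def expand_term(term):
    """Return the term plus every narrower (descendant) term in the ontology."""
    terms = {term}
    for child in ONTOLOGY.get(term, []):
        terms |= expand_term(child)
    return terms

def semantic_search(query, records):
    """Match records labeled with the query term or any narrower term."""
    matches = expand_term(query)
    return [r for r in records if r["label"] in matches]

records = [
    {"name": "Pollen exposure experiment", "label": "Hay Fever"},
    {"name": "Joint inflammation assay", "label": "Arthritis"},
    {"name": "Unrelated PCR run", "label": "PCR"},
]

# A search for the broad term finds the Hay Fever experiment too,
# even though the words "Autoimmune Disease" appear nowhere in it.
print(semantic_search("Autoimmune Disease", records))
```

A real semantic system would store these relationships as RDF triples and reason over an OWL ontology rather than a hard-coded dictionary, but the search-expansion principle is the same.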


By using industry-standard or carefully constructed custom ontologies that make sense for a particular organization, downstream searching and gathering of knowledge assets can be greatly facilitated, because a good ELN understands what kinds of resources you are looking for even if you do not know anything about the specific text or content of those resources. A good ELN can also automate the initial association of metadata with certain objects, so that scientists can spend less time manually categorizing their data and more time performing research (a toy sketch of this idea appears at the end of this post). In a sense, a good ELN can be trained to understand what a particular file "means", bringing the ELN one step closer to the goal of behaving as though it were a real human lab assistant, albeit one that never requests a vacation or asks for a pay raise. There's really only one ELN on the market that leverages modern semantic technologies, and that's CERF. Learn more at http://cerf-notebook.com
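As an illustration of what automated metadata association might look like, here is a small Python sketch that tags incoming files based on filename conventions. The rules, patterns and tags below are hypothetical examples, not actual CERF features or configuration.

```python
import re

# Hypothetical rules: metadata implied by filename conventions.
AUTO_TAG_RULES = [
    (re.compile(r"\.fastq$"), {"data_type": "DNA sequencing"}),
    (re.compile(r"^HPLC_"), {"instrument": "HPLC"}),
]

def auto_tag(filename):
    """Collect the metadata from every rule whose pattern matches the filename."""
    metadata = {}
    for pattern, tags in AUTO_TAG_RULES:
        if pattern.search(filename):
            metadata.update(tags)
    return metadata

print(auto_tag("HPLC_run42.csv"))  # {'instrument': 'HPLC'}
```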