
Showing posts with label ELN. Show all posts

Tuesday, November 19, 2024

Coming in CERF 6: Improved support for using custom apps to perform mission-critical tasks and analyses on files stored in CERF.

Imagine you are a research organization that works with data files in some specialized format. A genetics lab working with GenBank .GBK or SnapGene .DNA sequence files would be a good example. Now imagine your software engineers have written a custom app designed to perform some calculation or processing task on your data files, with the result or summary output to a new file.

Let's further imagine that, as a data manager, you need a good record of when any analyses were performed, who performed them, and precisely where the results of each analysis are stored. As it happens, this is a workflow that CERF ELN is very well pre-adapted to perform.

Typically, users work with CERF in conjunction with the default, industry-standard applications on their local computer. An MS Word file, for example, may automatically open in MS Word, whilst your .DNA files may open in, say, SnapGene. This workflow illustrates one of the unique advantages of a combined ELN and document management system that uses a desktop application to process your files: CERF carefully logs the interaction between the user and the files stored on the CERF server and displays all activity in the secure audit trail, so that managers are aware of current and past activity and access. In some cases, though, it may be advantageous to work with highly specialized applications that you've written yourself, designed specifically for performing specialized tasks on data stored in CERF. With CERF ELN, users can specify local applications on their computer that they would like to use to check out and edit specific file types. This allows users to optionally check out files from CERF and open them in a non-default local application.

Lab-Ally has been working with bioinformatics students at the University of Maryland to create a toolbox of small accessory applications that can be used for processing various data files stored in CERF. Each academic term, as part of a capstone bioinformatics class, small groups of students (supervised by Lab-Ally) design, build and test an application of their choice. The application is designed to solve some common bioinformatics problem. An example is described below.

One team of four students recently built a GenBank extractor that makes parsing genomic data easier: the program produces a simplified output from GenBank files that is readily compatible with CERF and the CERF search feature. The application can be used as a standalone tool or integrated with CERF ELN to allow for superior record keeping, better efficiency and improved organization-wide collaboration. The parser is designed to extract essential information from GenBank files and output a readable .rtf file.

What Does the GenBank Parser Do?

The parser extracts important data from GenBank files, such as:

  • Accession
  • Organism (Genus species)
  • Taxon data
  • Gene(s)
  • Genetic Sequence

It then organizes this data into an .rtf file, which is easy to read and compatible with most platforms. Below is an example of what you will find in the output:


RTF file showing various metadata retrieved from within a sequence file



How to Use the Parser as a Standalone Application


Install the Application:

  • Run the installation file on your computer.

Launching the Application:

  • Navigate to the executable file of the program: go to C:\Program Files\GenBankParser and double-click on GenBankParser.exe.
  • A window should pop up with an "Open File" button.

Troubleshooting Display Issues:

  • If the window doesn’t show up properly, try resizing the window. Some users have experienced this issue, and resizing the window can often solve it.

Processing a GenBank File:

  • After clicking the Open File button, choose a .gb or .gbff GenBank file from your system.
  • The application will process the file and save an .rtf file to your desktop.

How to Use the Parser with CERF

  • If you’ve installed the parser, the next step is to configure CERF so it can summon files from the CERF server on demand and utilize the parser tool. Without this step, CERF would simply open GenBank files in whatever the default sequence editing application is on the user's local machine.
  • In CERF, navigate to Tools > Options > Applications.
  • Add the GenBankParser by pointing to the .exe in C:\Program Files\GenBankParser.
  • Set the MIME type to chemical/x-genbank. This helps CERF understand what types of files you would like to open with the specified application.

This is how Tools > Options > Applications should look once it's set up: 


CERF external application selector
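The extension-to-MIME mapping that CERF asks for here is a general desktop-computing convention rather than anything CERF-specific. As a loose illustration of the same idea (using Python's standard library, not CERF's own configuration mechanism):

```python
import mimetypes

# Illustrative only: register the GenBank MIME type for the .gb and
# .gbff extensions -- the same extension-to-type mapping that the
# CERF Applications dialog captures.
mimetypes.add_type("chemical/x-genbank", ".gb")
mimetypes.add_type("chemical/x-genbank", ".gbff")

print(mimetypes.guess_type("sequence.gb")[0])  # chemical/x-genbank
```

Once the type is registered, any component that looks files up by MIME type, rather than by raw extension, will route .gb and .gbff files to the right handler.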


Viewing GenBank Files with CERF:

  • Locate any .gb or .gbff file in CERF’s collections.
  • Right-click the file, select View In..., and choose GenBankParser from the list.
  • The parser will open, allowing you to process the file.

CERF modified "View In..."  right-click options



File Output:

  • The application will process the file and save an .rtf file containing the results of the parser analysis to a specified local location.




Pasting as a Relation:

  • The file can then be dragged from the desktop into CERF, specifically onto the associated file, to paste it as a relation. This has the advantage that once added to CERF, the .rtf file is immediately indexed for searching, so users with the correct access permissions can search for target text located in the .rtf and, once they find THAT file, also locate the parent file containing the original raw sequence data.


New in CERF 6, the system will offer the option to automatically associate new files produced by custom applications (containing the results of some analysis) with the parent file containing the raw data. Since CERF offers outstanding version control, it will also be possible to perform these sorts of analyses with different versions of the original data file, associating the results with the correct version of the data in each case, and recording the entire process accurately in the CERF audit trail. We also hope to eventually offer this student-built parser and many other "add-on" tools for use with CERF on our website some time after the release of CERF 6 in 2025.

If you would like to see this tool in action or take a look at the code for the tool that these students built, or if you are a student or developer who would like to work with us to create additional tools like this genetic parser, we would love to hear from you. You can find contact information on the Lab-Ally website.



CERF ELN 5.3 is proving to be a workhorse, and CERF 6 is now well underway.


CERF 5.3 is another big step forward for Lab-Ally and CERF users everywhere. With the challenges scientists and labs everywhere experienced during the pandemic, CERF has become more valuable than ever: the solution is uniquely pre-adapted for managing work-from-home scientists, technicians and other staff who need to access data and documents securely, in ways that allow managers to retain outstanding, top-down awareness of who has been working on what, when they accessed the files, and what they changed.

Some of the new features added since CERF 5.0 include:

file viewer

View files and your folder hierarchy immediately in the center panel or in a new CERF window.

View files from notebooks in separate full-size windows to allow efficient comparison, examination and multitasking.

File viewer supports multiple windows to allow users to quickly compare any number of files.

File viewer is integrated with right-click menus throughout the system to allow easy examination of versions, search results or any resource in CERF.

Better support for files of all types and viewing unsupported files using “official print copy” combined with the file viewer throughout the system.


search

More complete and logical columns to display more information about search results.

More useful search parameters with better organization. New parameters include signature status, activity status and edit status.

Export of results as .csv

More features associated with saved searches to make it easier to use them to generate reports.

Easier to reset searches.

Better options for learning more about characteristics and location of items in search results.


usability

Parity of features inside and outside notebooks

Extended list of template files to allow for creating content more easily in any location, including “standalone” text editor files in file cabinets and more.

Increased flexibility for making notebook entries of various types including plain text and RTF.

Menus, buttons, workflows, speed and stability reworked throughout.

New features to monitor, maintain and fix network connectivity and alert user to network problems.

Ability to instantly export .csv summaries from various locations, including file info and the audit trail.

Improved performance, especially for mature servers storing many thousands of files, documents and collections.


print to PDF 

Numerous improvements to print to PDF process.


compliance and security

More complete information in audit trail with new section for access logs showing failed login attempts.

Hashing of resources in flight to prevent man-in-the-middle interception.

Improvements to controlled documents allowing for automated access to new items based on user acceptance of terms.

Exporter application for intuitive processing of exported xml files.
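The in-flight hashing item above can be sketched in general terms: the sender computes a digest of the bytes, and the receiver recomputes and compares it on arrival. This is a generic illustration of the technique, not a description of CERF's actual wire protocol.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest used to fingerprint a payload."""
    return hashlib.sha256(data).hexdigest()

# Sender computes a digest alongside the payload...
payload = b"experiment 42: raw sequence data"
digest = sha256_hex(payload)

# ...and the receiver recomputes it on arrival. A mismatch means the
# bytes were altered (maliciously or otherwise) in transit.
received = payload  # what actually arrived
if sha256_hex(received) != digest:
    raise ValueError("payload modified in transit")
```

In practice the digest itself must also travel over an authenticated channel (e.g. inside the TLS session), otherwise an attacker who can rewrite the payload can rewrite the digest too.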


Full release notes for CERF 5.3 (the current official release) are available, and anyone can test the software. Visit our website for instructions:


https://cerf-notebook.com/resources/getting-started-with-cerf-free-trial/


ongoing projects / CERF 6

Update to latest Java, Tomcat and MySQL versions.

External helper apps for performing various actions on files stored within CERF.

Customizable user interface “themes” to allow users to control look and feel.


CERF 6 has of course turned into a massive project, as we have jumped from Java 1.8 all the way to Java 21. To give our customers maximum flexibility, we plan to support both the server and the client software on Windows, Mac and Ubuntu environments, with the server hosted either on-premise or on a private AWS instance managed by Lab-Ally. We also now offer customers TWO purchase options: customers can choose our standard perpetual license with an annual support plan, or, to reduce up-front costs, opt for an all-inclusive annual subscription model.



Monday, January 23, 2017

CERF ELN Version 5.0 is here!

Well folks, CERF ELN 5.0 is finally ready for release. 



Read the press release here.


CERF 5.0 is the result of an enormous amount of hard work by many people and the culmination of almost two years of effort. This is the first version of CERF created by its new producer, Lab-Ally.

Job one was locating and moving the source code to an all-new, modern, agile and fully integrated development, build and support platform. This new platform will allow us to move forward more quickly after this initial release (or more accurately, "re-boot"). Our next engineering task involved updating the core components that make the whole shebang work: Java, Tomcat, OpenOffice, MySQL and modern SSL certificates, plus dozens of other components and libraries that all needed to be updated.

Next came the hard part, refactoring all the code to get the updated components and build environment to work together and spit out a functioning product. This part took us months, but when the newest version was finally birthed, we liked what we saw.

Then we went on a graphics spree, refreshing and redesigning almost all of the icons and buttons, and adding support for GUI features (like Mac's full screen mode) that didn't exist when CERF was first created.

As we worked on the product and started using it for data management within our own organization, obvious priorities for new features, improvements and refinements started to emerge, as did the need to "comment out" certain older, buggy or deprecated features that we plan to circle back to later. The rate at which the team started to brainstorm new ideas began to accelerate. New feature requests poured into our request system (JIRA) and before long, an ambitious roadmap that stretches years into the future started to emerge.

CERF 5 focusses on shoring up the product's most powerful features: semantic metadata and semantic search, round trip editing, flexible import and export of data, and the use of notes, tags and configurable ontologies to add meaning to your files. Several new search parameters were added and the default search parameter list was re-designed to make it easier to use. A new export tool was created and a new version of the Automaton (formerly the "Automation Client") was built. Lab-Ally has also redoubled efforts to prioritize product quality, speed and stability, and is putting much more emphasis on clear and complete documentation as well as compliance with industry-standard security measures like MS and Apple code signing (which previously had been largely ignored). Looking into the future, CERF will become increasingly focussed on the needs of GLP or "spirit of GLP" labs, with full support for things like ALCOA and related documentation principles.

The last piece in the puzzle was tracking down the code for the iPad app and redesigning it to work with modern iPads and to comply with Apple's more stringent code and security standards. Honestly, the iPad app was never much more than a prototype when it was first released in 2010 or 2011, so it took quite a lot of effort to get the new version to the point where we were comfortable releasing it. We called it iCERF and it's available on the iTunes Store now.

We are happy with the results and we think CERF is well positioned to take advantage of a growing demand for a full-featured Electronic Lab Notebook and 21CFR11 compliant document management system that can be installed on-site. The cloud may be popular for many sorts of data storage, but when it comes to mission critical, irreplaceable intellectual property, the smart organizations are getting tired of huge corporations holding their data hostage on the cloud where we all know that the US and Chinese governments will probably rummage through it any time they like.

If you want a free demo of this newest version, please contact Lab-Ally.

Wednesday, November 4, 2015

RSpace ELN Community Edition is now live.

The free version of the new RSpace Electronic Lab Notebook is now live! The system focusses on the needs of academic organizations and features Lab / PI centric data organization, consolidation of all the files you keep in external cloud systems like Box, Dropbox and Google Drive, easy collaboration within or between labs, and instant, flexible export of some or all of your data as a coherent and well organized bundle in your preferred format. RSpace is more than just Lab Notebook Software, it's also designed to be a digital hub and a vital part of your complete Research Data Management (RDM) strategy.

Try it for yourself by signing up here

Did I mention that it's FREE?



Friday, July 3, 2015

New RSpace Electronic Lab Notebook Videos

Lots of new RSpace ELN videos are now up on the RSpace YouTube Channel. If your lab or business is looking for a simple, powerful way to keep notes and collaboratively share data, then RSpace is for you. RSpace is the best browser based Electronic Lab Notebook solution available, but don't take my word for it, see for yourself on the RSpace video channel.

RSpace ELN videos for researchers

RSpace ELN videos for Principal Investigators (PIs)


Wednesday, February 18, 2015

Research Data Management (RDM) in the UK, factors affecting development of a coherent national strategy.

Research Data Management (RDM) should be a topic of discussion for all academic and government labs. How can data be coherently protected, archived, searched and made available in ways that further national and institutional research goals? How can (and should?) organizations co-operate to achieve these goals in a consistent way? Frankly, the US lags in this area because to some extent, most of the big academic research institutes see themselves as partial competitors and also because so many of the faculty tend to see themselves as independent agents rather than members of a national team striving towards some common goal for the greater good. In the UK, there tends to be more government sponsored co-ordination of national research goals, so the discussion of a national co-operative research policy appears to be more advanced, although not without its challenges. The article below appears with kind permission from my colleagues at Research Space, makers of the RSpace ELN, a solution that is focussed on the needs of institutional ELN deployments at large academic and government research organizations.
The original appears at:
http://www.dcc.ac.uk/blog/reflections-idcc15-why-road-broader-take-rdm-opening
The Digital Curation Centre (DCC) website at http://www.dcc.ac.uk is a great place to visit regularly for anyone with an interest in scientific data management strategies at the institutional and national levels.
The three fundamental factors influencing RDM take up
The ‘Why is it taking so long’ panel discussion touched on two themes that are crucial in understanding the kind of environment that is conducive to take up of RDM. Geoff Bilder repeatedly, and correctly in my view, hammered home the point that until the right infrastructure is in place you can’t expect researchers to be enthusiastic about engaging with RDM; in fact you can’t expect them to do it at all.
Geoff pointed to a second, and in his view underlying, issue, namely funding -- without an appropriate funding model infrastructure will develop too slowly to support, and stimulate, take up of RDM. Geoff sees the problem as originating in the current funding model, which tries to squeeze infrastructure development out from grant funding.
Up to a point I also agree with this second strand of Geoff’s argument. But I would suggest that it’s possible to dig down and identify a third, even more fundamental, factor which lies beneath the funding conundrum. This is what could be termed ‘culture’, specifically researchers’ attitudes to RDM infrastructure and tools, and their views on RDM’s priority or lack of priority in the context of their broader need for support.
If researchers don’t view RDM as a priority they are not going to pressure funders or their host institutions to provide the necessary infrastructure and tools to make it possible. No amount of cajoling or encouraging is going to change that, and until recently the RDM community has mostly been in the position of fighting that uphill battle.
So by culture I mean an understanding on the part of researchers of the usefulness of a particular bit of infrastructure or tool, and a desire on their part to adopt or use it because they think it will benefit their research. I would argue that when the culture and the infrastructure or tool are there, funding will follow. Depending on the circumstances -- the cost of the bit of infrastructure or tool, the institutional set up, budget, funding cycles, etc. -- this may take longer or happen more quickly, but it will happen. My amended picture of the three key factors driving RDM uptake is displayed in the following diagram.


Figure 1 Factors influencing RDM take up

The three constituencies in action:  ELNs at the Universities of Manchester and Edinburgh
A second point to understanding the critical minimum circumstances for RDM to be taken up, and to take off, is that RDM happens in particular institutions. We saw this point illustrated in the many presentations at #IDCC15 where people from a wide variety of institutions talked about RDM at their institution – how it is developing, challenges, progress, issues, etc. In each case the status and prospect of RDM is inseparable from the institutional environment.
Let me here make a second assertion  -- a culture conducive to RDM take up requires buy-in or actually enthusiastic support from three key constituencies in research institutions: researchers, IT managers and administrators – data librarians and research data administrators. Absent support from all three constituencies the culture will not develop, and without that the requisite pressure to find funding for RDM will not happen.



Figure 2 Three key constituencies in research institutions influencing RDM take up
To bring this point home I’ll end by recounting a recent personal experience. In January my colleague Richard Adams and I gave a talk at the University of Manchester about our RSpace ELN and in particular how it had been integrated into the RDM infrastructure at the University of Edinburgh. Our host Mary Mcderby had advertised the session in advance and, to our amazement, more than 60 people showed up: an equal mix of researchers, IT managers and research administrators. Even more amazingly, we were greeted like rock stars (ok, not quite like rock stars), and peppered by a volley of interested questions and comments from all three sections of the audience.
What became clear as the discussion progressed was not only that all three constituencies had an active interest in adopting an ELN, but that they were aware of each other’s interests and by and large seemed supportive of each other and happy to work together. That is what I mean by a culture conducive to RDM take up. I’m confident that Manchester will find a way, sooner rather than later, to adopt an institutional ELN, because (a) the will is there across all three constituencies, and, crucially (b) this is a tool that all three constituencies can see will bring benefits.
The road to a broadly dispersed RDM culture and sustainable funding models is opening up
Pioneering institutions like Manchester and Edinburgh may have to be a bit creative, and come up with innovative and ad hoc solutions, to fund take up of RDM infrastructure. But, as they are now beginning to show the way, pressure will grow on funders to put forward sustainable and well considered funding solutions that are replicable more broadly as the culture at other institutions develops to the point where the majority of research institutions find themselves in a position to follow in the footsteps of the pioneers.
Rory Macneil
Research Space

Wednesday, April 16, 2014

Integrating an ELN with a university's long term Datashare archive.

Longevity of digital data is always an issue, and as more and more scientists make the jump to ELNs, the importance of archiving data for future generations is often overlooked. This article discusses a recent initiative at the University of Edinburgh to solve this problem by integrating the RSpace ELN with their DataShare long-term archiving system. Imagine what a loss it would be if there were no equivalent to "Down House" (where Darwin's notes and data are kept).

http://datablog.is.ed.ac.uk/2014/04/15/using-an-electronic-lab-notebook-to-deposit-data/

Friday, September 28, 2012

Why deploying an ELN is worth the effort.


One of the linked articles below mentions a group who considered purchasing a commercial ELN system on several occasions. At least once, the research organization and the ELN vendor were "at the altar", waiting for management to sign on the dotted line. In the end, management balked because they could not see how they would realize immediate, obvious cost savings. The perceptions of management outweighed the needs of the scientists and, apparently, the organization's commitment to research integrity. The scientists were eager to deploy the solution because their data management and research coordination efforts were, frankly, in disarray. I can't say that our product would definitely have prevented the serious problems this group subsequently ran into, but I can say that avoiding situations like these is what ELNs were designed for. I can also say that the "discovery" portion of any legal or regulatory proceeding will cost an affected organization a lot less, sometimes thousands or millions less, if they have the ability to search for and produce required documents and audit trail information efficiently. Without an ELN, organizations will need to pay a small army of data miners to go through every single paper notebook and document by hand until they locate needed information, and even then it may be legally flimsy, incomplete and easy to challenge in court, or unconvincing to a patent attorney or board of inquiry.

Contrary to what the sales team of some products might tell you, ELNs are not really designed to save an enormous amount of money "up-front". Rather, they are designed to help improve long-term productivity, protect your investment by safeguarding data, prevent catastrophic data loss and errors of internal communication, identify WHERE problems have occurred after the fact, and reduce the negative consequences of missteps by making it easier to locate legally critical documents and data when they REALLY count.

Managers are sometimes more interested in their careers than they are in science. Notice that in the articles below, it is often a high ranking manager whose name appears in the article. If your managers want their careers to come to a screeching halt, by all means tell them that they don't need an ELN. When their name appears in an article like one of those below, they will realize their mistake as they clear their desk. I have spoken to many labs who have been prompted to look for an ELN product AFTER something bad has already happened. ELNs are not about what you can afford to pay, they are about what you can afford to LOSE or what kind of PR damage and risk your organization and your managers personally are prepared to tolerate.

Below are some other examples of things your managers do NOT want to happen at their research organization. These types of misstep are surprisingly common. They can and do happen in ANY lab, commercial, academic or government.


How a collapsing scientific hypothesis led to a lawsuit and arrest

Dutch 'Lord of the Data' Forged Dozens of Studies

Stanford Grad Adds Plagiarism to Gene-Modification IP Suit Against School, Professor

Sequenom Discloses Mishandling of Test Data; Shares Plummet

A Medical Madoff: Anesthesiologist Faked Data in 21 Studies

Scientific fraud news, articles and information

Scientific Journals Take Measures Against Inside Fraud

Scotts Miracle-Gro gets record penalties in pesticide case

Finally, take a look at the "hall of shame" link below. Notice in particular how MANY cases there are listed here. Are you taking steps to prevent the bad PR and monetary damages that could result from issues like these occurring at YOUR research organization or laboratory?


These are some examples of lists in which you definitely do NOT want to find yourself or your lab mentioned: