
Assembling proteomics data as a prerequisite for the analysis of large scale experiments.
Chem Cent J. 2009 Jan 23;3:2.

Abstract

BACKGROUND

Despite the complete determination of the genome sequences of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present the results of large-scale proteomics experiments.

RESULTS

In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data from different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred into our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) that meets the requirements of the PEDRo standard. Furthermore, the stored proteomics data could be exported from SQL-LIMS via XML.
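The workflow sketched in the abstract (instrument-specific parsers mapping heterogeneous outputs into one unified record type, keyed by a submission identifier and exportable as XML) can be illustrated with a minimal Java sketch. All class names, field names, and line formats below are hypothetical; the abstract does not disclose the actual SQL-LIMS schema or parser internals.

```java
import java.util.ArrayList;
import java.util.List;

public class ProteomicsParserSketch {
    // Hypothetical unified record that all instrument-specific parsers map into,
    // standing in for a row in a SQL-LIMS table.
    record ProteinHit(String submissionId, String method, String accession, double score) {}

    // Hypothetical MALDI-MS export line: "accession<TAB>score".
    static ProteinHit parseMaldiLine(String submissionId, String line) {
        String[] parts = line.split("\t");
        return new ProteinHit(submissionId, "MALDI-MS", parts[0], Double.parseDouble(parts[1]));
    }

    // Hypothetical LC/ESI-MS export line: "accession,score".
    static ProteinHit parseEsiLine(String submissionId, String line) {
        String[] parts = line.split(",");
        return new ProteinHit(submissionId, "LC/ESI-MS", parts[0], Double.parseDouble(parts[1]));
    }

    // Export the unified records as simple XML, mirroring the abstract's
    // statement that stored data were extractable via XML.
    static String toXml(String submissionId, List<ProteinHit> hits) {
        StringBuilder sb = new StringBuilder("<submission id=\"" + submissionId + "\">\n");
        for (ProteinHit h : hits) {
            sb.append("  <hit method=\"").append(h.method())
              .append("\" accession=\"").append(h.accession())
              .append("\" score=\"").append(h.score()).append("\"/>\n");
        }
        return sb.append("</submission>\n").toString();
    }

    public static void main(String[] args) {
        // Different instrument formats are unified under one submission identifier.
        List<ProteinHit> hits = new ArrayList<>();
        hits.add(parseMaldiLine("SUB-001", "P9WQP3\t87.5"));
        hits.add(parseEsiLine("SUB-001", "P0A6F5,42.0"));
        System.out.print(toXml("SUB-001", hits));
    }
}
```

The design point is the one the abstract emphasizes: once every parser targets the same record type, a single submission identifier retrieves all experimental data regardless of which instrument produced it.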

CONCLUSION

The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data from different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, the distinct data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage over multi-software solutions, especially where personnel turnover is high. Furthermore, large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS in order to query results in a systematic manner. On the other hand, such database systems are expensive and require at least one full-time administrator and a specialized lab manager. In addition, the rapid technical development in proteomics can make it difficult to accommodate new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially for established processes such as gel electrophoresis and mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into the public domain via the Data Transfer Tool (DTT) facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing with MS-Screener improved the reliability of mass analysis and prevented the storage of junk data.

Authors+Show Affiliations

Max Planck Institute for Infection Biology, Core Facility Protein Analysis, Berlin, Germany. Frank.Schmidt@uni-greifswald.de

Pub Type(s)

Journal Article

Language

eng

PubMed ID

19166578

Citation

Schmidt, Frank, et al. "Assembling Proteomics Data as a Prerequisite for the Analysis of Large Scale Experiments." Chemistry Central Journal, vol. 3, 2009, p. 2.
Schmidt F, Schmid M, Thiede B, et al. Assembling proteomics data as a prerequisite for the analysis of large scale experiments. Chem Cent J. 2009;3:2.
Schmidt, F., Schmid, M., Thiede, B., Pleissner, K. P., Böhme, M., & Jungblut, P. R. (2009). Assembling proteomics data as a prerequisite for the analysis of large scale experiments. Chemistry Central Journal, 3, 2. https://doi.org/10.1186/1752-153X-3-2
Schmidt F, et al. Assembling Proteomics Data as a Prerequisite for the Analysis of Large Scale Experiments. Chem Cent J. 2009 Jan 23;3:2. PubMed PMID: 19166578.
* Article titles in AMA citation format should be in sentence-case