Moore's law, the empirical observation that the number of transistors in an electronic chip doubles every two years [1], is simultaneously a result and a cause of the exponential growth of information technology, a growth with few parallels outside the computer industry. Over the last half century, biotechnology and information technology have developed in parallel, at a rate unmatched in any other field. This article offers a perspective on the relations between big data and biotechnology, including the related technologies of artificial intelligence and machine learning, and describes how data integration, data exploitation, and process optimization correspond to three essential steps in any future biotechnology project. It also lists a number of application areas where the ability to use big data will become a key factor, including drug discovery, drug recycling, drug safety, functional and structural genomics, proteomics, pharmacogenetics, and pharmacogenomics, among others.
Developments in biotechnology are increasingly dependent on the extensive use of big data, generated by modern high-throughput instrumentation technologies and stored in thousands of databases, both public and private. Future progress in this area therefore depends critically on the ability of biotechnology researchers to master the skills required to integrate their own contributions effectively with the large amounts of information available in these databases.