The definition of big data, from Wikipedia
Big data-From Wikipedia
In information technology, big data[1][2] is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage,[3] search, sharing, analysis,[4] and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions."[5][6][7]
As of 2012, limits on the size of data sets that are feasible to process in a reasonable amount of time were on the order of exabytes of data.[8][9] Scientists regularly encounter limitations due to large data sets in many areas, including meteorology, genomics,[10] connectomics, complex physics simulations,[11] and biological and environmental research.[12] The limitations also affect Internet search, finance and business informatics. Data sets grow in size in part because they are increasingly being gathered by ubiquitous
information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification readers, and wireless sensor networks.[13][14] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[15] as of 2012, every day 2.5 exabytes (2.5×10^18 bytes) of data were created.[16]
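To get a feel for what "doubling every 40 months" implies, here is a minimal back-of-the-envelope sketch. The 1986 starting year is an assumption chosen for illustration (the article only says "since the 1980s"), not a figure from the text:

```python
# Illustrative arithmetic, not from the article: if per-capita storage
# capacity doubles every 40 months, how much does it grow between an
# assumed start year of 1986 and 2012?
START_YEAR = 2012 - 1986          # 26 years elapsed
months = START_YEAR * 12          # 312 months
doublings = months / 40           # 7.8 doubling periods
growth_factor = 2 ** doublings    # roughly a 223-fold increase

print(f"{doublings:.1f} doublings -> ~{growth_factor:.0f}x capacity")
```

Under these assumptions, capacity grows by a factor of a few hundred over the period, which is consistent in spirit with the exponential trend the article describes.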