Big Data and a Saturation Point

  • #1
Gear300
I ask this with more of a software background than an engineering background, but here I go anyway.

Big data is arguably the cultural motif of the information age. Trends involve immersing ourselves in media of various sorts and processing them at exceptional rates. Of course, the information can be poisoned with false info, propaganda, virulent memes, and other noise, but at the very least, none of this is ahistorical. Other trends include the fourth paradigm in science, or data science; business consolidation and migration to the cloud; streaming data, or data that exists in the network and not just at endpoints; and so on.

So the question here has to do with the growing demand for, and complexity of, data. As it grows, it grates against our architectures and algorithms. The epitome of classical computing lies in parallel architectures, scaled both horizontally and vertically, e.g. microservices in industry solutions (a toy sketch of that scale-out pattern is at the end of this post). Big cloud vendors run on networks with bisection bandwidths on the scale of petabits per second, and they do what they can to optimize data storage and processing. But is there a credible futurist or tech-economist who expects a saturation point in the growing demand for data? Sort of like one who, back in the early information era, prefigured the limits of classical computing :biggrin:?
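To be concrete about the scale-out pattern, here is a minimal Python sketch: partition the data into chunks and let a pool of workers process them independently, then combine the partial results. The names and sizes (process_chunk, the chunk count) are illustrative stand-ins, not any particular vendor's stack.

```python
# Minimal sketch of horizontal (scale-out) data processing.
# process_chunk and the chunk count are illustrative placeholders.
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for real per-partition work (parsing, filtering, aggregating)."""
    return sum(x * x for x in chunk)

def split(data, n_chunks):
    """Partition the data into roughly equal chunks, one per worker."""
    k = max(1, len(data) // n_chunks)
    return [data[i:i + k] for i in range(0, len(data), k)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, split(data, 4))
    print(sum(partials))  # combine partial results, map-reduce style
```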
 
  • #2
Depends on how you define "saturation point".

Personally, I find it hard to see why there would be. Even if you have a totally serial process with scadzillions of data points, given the speed of modern processors it would have to be an awfully slow/complex algorithm for it to take more time than it is worth to run. Worst case, it seems to me, would be that you have to use a statistically meaningful subset of the data; a quick sketch of what I mean follows.
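Roughly this, as a quick Python sketch (the data and the sample size are made up for illustration):

```python
# Sketch of the fallback: estimate a statistic from a random subset rather
# than scanning everything. Population and sample size are illustrative.
import random
import statistics

population = [random.gauss(50, 10) for _ in range(1_000_000)]  # stand-in for the full data

sample = random.sample(population, k=10_000)  # statistically meaningful subset
print(f"estimate from sample: {statistics.mean(sample):.3f}")
print(f"mean of full data:    {statistics.mean(population):.3f}")  # feasible only because this toy fits in memory
```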
 
  • #3
I kind of figured that was the case, but thought it was worth asking. Do you know of any pressures on our current computing systems, present or expected, that could be listed?
 
  • #4
Not in terms of data. As for algorithms, there are many for which it is believed that rapid solution will be possible only after we have much more robust quantum computing systems than are currently available. Solution by current non-quantum processors is completely out of the question.
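Factoring is the canonical example. As a toy Python illustration of the wall (all sizes and helper names are just for demonstration): trial division does work proportional to sqrt(n), so the cost roughly doubles with every two extra bits, whereas Shor's algorithm on a sufficiently robust quantum computer would run in polynomial time.

```python
# Toy illustration: brute-force factoring cost grows exponentially with bit
# length. Sizes and helper names are illustrative only.
import time

def smallest_factor(n):
    """Smallest nontrivial factor of odd n > 1, by trial division."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

def next_prime(n):
    """First odd prime >= n (slow but fine at toy sizes)."""
    if n % 2 == 0:
        n += 1
    while smallest_factor(n) != n:
        n += 2
    return n

for bits in (30, 40, 50):
    p = next_prime(2 ** (bits // 2))  # two odd primes near 2**(bits/2)
    q = next_prime(p + 2)
    t0 = time.perf_counter()
    smallest_factor(p * q)            # hard case for trial division
    print(f"~{bits}-bit semiprime: {time.perf_counter() - t0:.4f} s")
```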
 
  • #5
Alright. I guess I could read up on the rest. Thanks for the reply.
 