Big Data and a Saturation Point

  • Thread starter Gear300
  • #1
Gear300
I ask this with more of a software background than an engineering background, but here I go anyway.

Big data is arguably the cultural motif of the information age. The trend is toward immersing ourselves in media of all sorts and processing it at exceptional rates. Of course, the information can be poisoned with false info, propaganda, virulent memes, and other noise, but at the very least none of that is without historical precedent. Other trends include the fourth paradigm in science, or data science; the consolidation of businesses and their migration to the cloud; streaming data, i.e. data that lives in the network and not just at endpoints; and so on.

So the question has to do with the growing volume and complexity of data. As it grows, it strains our architectures and algorithms. The state of the art in classical computing is parallelism, scaled both horizontally and vertically, e.g. microservices in industry solutions. The big cloud vendors run networks with bisection bandwidths on the scale of petabits per second, and they do what they can to optimize data storage and processing. But is there a credible futurist or tech-economist who expects a saturation point in the growing demand for data? Something like the forecasts that anticipated the eventual stalling of classical computing back in the early information era :biggrin:?
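To put a rough number on that, here is a back-of-the-envelope sketch. The server counts and link speeds are assumptions for illustration only, not any particular vendor's fabric; the point is just that a non-blocking (full bisection bandwidth) fabric scales its bisection bandwidth linearly with the number of servers.

```python
# Back-of-the-envelope sketch with assumed numbers (not any vendor's real
# fabric): in a non-blocking fabric, the bisection bandwidth is roughly half
# the number of servers times the per-server link rate.

def bisection_bandwidth_bps(num_servers: int, link_gbps: float) -> float:
    """Aggregate bisection bandwidth in bits per second."""
    return (num_servers / 2) * link_gbps * 1e9

for servers in (10_000, 100_000, 1_000_000):
    bw = bisection_bandwidth_bps(servers, link_gbps=100.0)
    print(f"{servers:>9,} servers @ 100 Gb/s -> ~{bw / 1e15:.1f} Pb/s bisection bandwidth")
```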
 

Answers and Replies

  • #2
phinds
Science Advisor
Depends on how you define "saturation point".

Personally, I find it hard to see why there would be. Even if you have a totally serial process with scadzillions of data points, given the speed of modern processors it would have to be an awfully slow/complex algorithm for it to take more time than it is worth to run. Worst case, it seems to me, would be that you have to use a statistically meaningful subset of the data.
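As a toy illustration of that worst case, here is a minimal sketch of estimating a statistic from a random subset instead of touching every point. The data is synthetic and the numbers are made up; only the Python standard library is used.

```python
# Minimal sketch of the "statistically meaningful subset" idea: estimate the
# mean from a ~1% random sample and attach a standard error, rather than
# processing the full dataset. Synthetic data, standard library only.

import random
import statistics

random.seed(0)
full_dataset = [random.gauss(50.0, 12.0) for _ in range(1_000_000)]

sample = random.sample(full_dataset, k=10_000)            # ~1% of the data
mean = statistics.fmean(sample)
std_err = statistics.stdev(sample) / len(sample) ** 0.5   # standard error of the mean

print(f"estimated mean ~ {mean:.2f} +/- {1.96 * std_err:.2f} (95% CI)")
print(f"true mean       {statistics.fmean(full_dataset):.2f}")
```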
 
  • #3
Gear300
I kind of figured that was the case, but thought it was worth asking. Do you know of any pressures on our current computing, present or anticipated, that could be listed?
 
  • #4
phinds
Science Advisor
Not in terms of data. As for algorithms, there are many for which it is believed that rapid solution will be possible only after we have much more robust quantum computing systems than are currently available. Solution by current non-quantum processors is completely out of the question.
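Just to illustrate why "out of the question" is not hyperbole for problems whose cost scales exponentially, here is a rough sketch. The guess rate is an assumed figure chosen purely to show the scaling, and it is not tied to any specific algorithm.

```python
# Rough illustration of exponential scaling: exhaustive search over an n-bit
# space doubles with every added bit. The guess rate is an assumed figure for
# one fast classical core, chosen only to show the scale.

GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.156e7

for n_bits in (40, 56, 80, 128):
    years = 2 ** n_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n_bits:>3}-bit search space: ~{years:.2e} years of brute force")
```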
 
  • #5
Gear300
Alright. I guess I could read up on the rest. Thanks for the reply.
 
