Big Data and a Saturation Point

Big data is a defining characteristic of the information age, with trends focusing on rapid media processing and the integration of data science into various sectors. As data complexity increases, it challenges existing computing architectures and algorithms, particularly in cloud environments that handle vast amounts of data. The discussion raises the question of whether a saturation point in data demand is foreseeable, suggesting that current processing capabilities may not reach a limit anytime soon. It is noted that even with extensive data points, modern processors can efficiently handle tasks unless algorithms become overly complex. The conversation also highlights the potential need for advancements in quantum computing to solve certain algorithmic challenges.
Gear300
I ask this with more of a software background than an engineering background, but here I go anyway.

Big data is arguably the cultural motif of the information age. Trends involve immersing ourselves in media of various sorts and processing it at exceptional rates. Of course, the information can be poisoned with false info, propaganda, virulent memes, and other noise, but at the very least, none of this is ahistorical. Other trends include the fourth paradigm of science, i.e. data science; business consolidation and migration to the cloud; streaming data, or data that lives in the network and not just at endpoints; and so on.

So the question here has to do with the growing demand for, and complexity of, data. As it grows, it grates against our architectures and algorithms. The epitome of classical computing lies in parallel architectures, scaled both horizontally and vertically, e.g. microservices in industry solutions. Big cloud vendors run networks with bisection bandwidth on the scale of petabits per second, and they do what they can to optimize data storage and processing. But is there a credible futurist or tech-economist who expects a saturation point in the growing demand for data? Sort of like someone who foresaw the limits of classical computing back in the early information era :biggrin:?
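For a rough sense of that network scale, here is a back-of-envelope sketch; the 1 Pb/s bisection figure and the 1 PB working set are illustrative assumptions on my part, not vendor numbers:

```python
# Back-of-envelope: how long an all-to-all shuffle of a large working set
# takes across a datacenter fabric, given its bisection bandwidth.
# Both numbers below are illustrative assumptions, not vendor figures.

bisection_bandwidth_bits_per_s = 1e15   # assume ~1 petabit/s aggregate bisection
working_set_bytes = 1e15                # assume ~1 petabyte of data to shuffle

# In a uniform all-to-all shuffle, roughly half the data crosses the bisection.
bytes_crossing_bisection = working_set_bytes / 2
seconds = bytes_crossing_bisection * 8 / bisection_bandwidth_bits_per_s

print(f"Shuffle time across the bisection: ~{seconds:.1f} s")
# ~4 s: moving a petabyte around the fabric is relatively cheap; the harder
# costs tend to be per-node I/O, serialization, and the algorithm itself.
```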
 
Depends on how you define "saturation point".

Personally, I find it hard to see why there would be. Even if you have a totally serial process with scadzillions of data points, given the speed of modern processors it would have to be an awfully slow/complex algorithm for it to take more time than it is worth to run. Worst case, it seems to me, would be that you have to use a statistically meaningful subset of the data.
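To make the "statistically meaningful subset" point concrete, here is a minimal sketch using synthetic data and plain random sampling; the dataset size and sample size are arbitrary assumptions:

```python
# Minimal sketch: estimate a mean from a random subsample instead of
# scanning the full dataset. Data and sizes are synthetic/illustrative.
import random
import statistics

random.seed(0)
full_data = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]  # stand-in for a huge dataset

sample = random.sample(full_data, 10_000)  # 1% simple random sample

est_mean = statistics.fmean(sample)
est_sd = statistics.stdev(sample)
# Rough 95% confidence half-width for the mean under simple random sampling.
half_width = 1.96 * est_sd / (len(sample) ** 0.5)

print(f"estimated mean = {est_mean:.2f} ± {half_width:.2f}")
print(f"true mean      = {statistics.fmean(full_data):.2f}")
```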
 
I kind of figured that was the case, but thought it was worth asking. Do you know of any pressures on our current computing, whether present or anticipated, that could be listed?
 
Not in terms of data. As for algorithms, there are many for which it is believed that rapid solution will be possible only after we have much more robust quantum computing systems than are currently available. Solution by current non-quantum processors is completely out of the question.
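As a rough illustration of the kind of gap being described, here is a small sketch comparing the standard asymptotic cost estimates for factoring an n-bit number classically (general number field sieve heuristic) versus with Shor's algorithm; constants are dropped, so these are shapes of curves, not runtimes:

```python
# Shape-of-the-curve comparison: sub-exponential classical factoring cost
# (general number field sieve heuristic) vs. polynomial quantum cost
# (Shor's algorithm). Constants are omitted; these are not runtimes.
import math

def gnfs_cost(bits):
    """Heuristic GNFS cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(bits):
    """Shor's algorithm scales roughly as (log N)^3, i.e. bits**3 quantum operations."""
    return bits ** 3

for bits in (512, 1024, 2048, 4096):
    print(f"{bits:5d}-bit modulus: classical ~ {gnfs_cost(bits):.2e}, quantum ~ {shor_cost(bits):.2e}")
```

The classical curve grows sub-exponentially in the bit length while the quantum one is polynomial, which is the sense in which some problems are considered out of reach for non-quantum processors.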
 
Alright. I guess I could read up on the rest. Thanks for the reply.
 