What is HP's memory-driven computing?

  • Thread starter: ShayanJ
  • Tags: Computing
SUMMARY

HP's memory-driven computing technology aims to speed up computation by giving programs far more memory to work with. The idea discussed here is a memory-for-time trade-off: a program that uses more memory can finish sooner, for example a version using 4 GB of memory completing a task in half the time of a 1 GB counterpart. The gain is not universal; how much speed the extra memory buys depends on the specific program and what it computes. The discussion also mentions the possible use of Linux as on-chip firmware and optical-fiber buses to improve performance.

PREREQUISITES
  • Understanding of memory-driven computing principles
  • Familiarity with computational complexity theory
  • Knowledge of Linux operating system functionalities
  • Insight into optical fiber communication technologies
NEXT STEPS
  • Research HP's memory-driven computing architecture
  • Explore computational complexity theory and its implications
  • Learn about Linux as on-chip firmware
  • Investigate optical fiber bus technologies and their applications
USEFUL FOR

This discussion is beneficial for computer scientists, software engineers, and IT professionals interested in advanced computing architectures and performance optimization strategies.

ShayanJ (Science Advisor, Insights Author):
I just read about HP's new computing technology, memory-driven computing, but I can't figure out the idea behind it. Can anyone explain it?

Thanks

https://www.hpe.com/us/en/newsroom/news-archive/press-release/2016/11/1287610-hewlett-packard-enterprise-demonstrates-worlds-first-memory-driven-computing-architecture.html

http://www.preposterousuniverse.com/blog/2016/12/20/memory-driven-computing-and-the-machine/
 
If it is what I think it is, then the idea is that what you give up in memory you gain back in time. In simple words, if a program is written to run using 1 GB of memory and takes 1 hour to complete, you can write a different program that computes the same thing using, say, 4 GB of memory, and this second program will take only half an hour to complete. The trade-off is not directly proportional; it depends on the program and what it computes, and you might need 10x or even 100x the memory to cut the running time by a factor of 2. And of course it is not guaranteed that this works for all sorts of programs. There is a theorem in computational complexity theory about this, but right now I can't remember its name or exact formulation.
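To make that memory-for-time trade-off concrete, here is a minimal Python sketch. It is my own toy illustration of the general principle, not anything from HPE's architecture: the cached version stores every intermediate result in memory, so it avoids recomputation and finishes far sooner than the version that keeps almost nothing around.

```python
import time
from functools import lru_cache

# Low-memory version: keeps almost nothing around, so it recomputes
# the same subproblems over and over.
def fib_recompute(n):
    if n < 2:
        return n
    return fib_recompute(n - 1) + fib_recompute(n - 2)

# Memory-hungry version: caches every intermediate result,
# trading memory for a large reduction in running time.
@lru_cache(maxsize=None)
def fib_cached(n):
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

if __name__ == "__main__":
    n = 35
    t0 = time.perf_counter()
    fib_recompute(n)
    print(f"recompute everything: {time.perf_counter() - t0:.3f} s")

    t0 = time.perf_counter()
    fib_cached(n)
    print(f"cache in memory:      {time.perf_counter() - t0:.6f} s")
```

The same idea shows up at larger scales in precomputed lookup tables, memoization, and in-memory databases, which all spend memory to avoid redoing work; as I understand it, that is the kind of trade-off a memory-centric architecture is meant to make cheap.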
 
Linux as on-chip firmware with optical fiber buses?
 
