Deep Learning, learning completion, silicon chip implementation

  • Thread starter: kris kaczmarczyk
  • Tags: Silicon
SUMMARY

The discussion centers on the feasibility of hard-coding the static coefficients produced by a deep learning process, specifically for applications such as chess playing. Participants explore whether the coefficients can stabilize at a certain error percentage, allowing them to be embedded into silicon chips for optimal performance. The conversation references the limitations of current technology, such as chips with two trillion transistors, and the vast number of possible chess positions (on the order of 1e43), indicating that while progress is being made, significant challenges remain in achieving a perfect chess player through hardware implementation.

PREREQUISITES
  • Understanding of deep learning concepts, particularly backpropagation.
  • Familiarity with silicon chip architecture and transistor technology.
  • Knowledge of error percentage metrics in machine learning.
  • Basic principles of game theory as applied to chess.
NEXT STEPS
  • Research the implications of hard-coding coefficients in deep learning applications.
  • Explore advancements in silicon chip technology, focusing on transistor density and performance.
  • Learn about error minimization techniques in machine learning models.
  • Investigate the current state of AI in game theory, particularly in chess and other strategic games.
USEFUL FOR

AI researchers, hardware engineers, game developers, and anyone interested in the intersection of deep learning and silicon chip technology.

kris kaczmarczyk
TL;DR
Backpropagation, the learning process, and the coefficients at the hidden-layer nodes: can they be hard-coded on silicon?
After a lengthy process of "deep learning" and backpropagation, would we end up with static coefficients for the thousands of nodes? For example, for playing chess, would that final state be good to hard-wire onto a silicon chip, giving us a perfect chess player?

Or are the coefficients always changing? In other words, can we stop learning at a certain error percentage and then have a fixed set of numbers that we can hard-code onto the hardware (driving cars, translating, painting)?
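In practice, yes: once training converges, the coefficients are static and can be frozen. A minimal toy sketch (my own illustration, not from the thread) of a tiny backpropagation-trained network where learning stops at a chosen error level and the weights are then quantized, as one would before committing them to hardware:

```python
# Toy illustration: train a tiny network with backpropagation until
# the error drops below a threshold, then freeze and quantize the
# coefficients, as a silicon implementation would hard-wire them.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR target

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    if np.mean(err**2) < 1e-3:    # "stop at a certain error percentage"
        break
    # backpropagation of the error through both layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

# The coefficients are now static numbers. Quantizing them to 8-bit
# fixed point is the kind of step taken before hard-coding on a chip.
scale = 127 / np.abs(W1).max()
W1_fixed = np.round(W1 * scale).astype(np.int8)
```

The catch is not whether the numbers can be frozen (they can), but that a frozen network is only as good as the error level it reached: hard-wiring it gives you a fast chess player, not a perfect one.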

https://www.popularmechanics.com/technology/design/a28816626/worlds-largest-computer-chip/
 
https://en.wikipedia.org/wiki/Solving_chess#Predictions_on_when/if_chess_will_be_solved

Two trillion (2e12; since the article is inch-based, it must come from one of those countries) transistors still isn't much if you have to deal with 1e43 board positions...

But it'll be a nice step forward.

You would need a lot of those chips! I wonder if anyone can make a Shannon-like guess?

Note that even one layer of chips covering the entire surface of the Earth only gets you about 1e16 chips.
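The arithmetic behind those figures, using round numbers (my own back-of-the-envelope sketch; the chip area is roughly that of the wafer-scale chip in the linked article, and one transistor per stored position is wildly optimistic):

```python
# Order-of-magnitude check on the thread's estimates (round numbers,
# one transistor per board position -- an absurdly generous assumption).
positions = 1e43              # rough count of chess positions (Wikipedia)
transistors_per_chip = 2e12   # the wafer-scale chip from the article
chips_needed = positions / transistors_per_chip
print(f"chips needed: {chips_needed:.0e}")             # 5e+30

# One layer of such chips tiling the Earth's surface:
earth_surface_m2 = 5.1e14     # ~510 million km^2
chip_area_m2 = 0.046          # wafer-scale chip is ~46,000 mm^2
chips_on_earth = earth_surface_m2 / chip_area_m2
print(f"chips covering Earth: {chips_on_earth:.0e}")   # 1e+16
```

So even paving the planet falls short by roughly fourteen orders of magnitude, which is why the reply calls the chip a nice step forward rather than a route to solved chess.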
 
