
kris kaczmarczyk


- TL;DR Summary
- Back propagation, the learning process, and the coefficients at the hidden-layer nodes: can they be hard-coded onto silicon?

After a lengthy process of "deep learning" and back propagation, would we get __static coefficients__ for the thousands of hidden-layer nodes? For example, for playing chess, would that trained state be good to hard-wire onto a silicon chip, so that we would have a perfect chess player? Or are the coefficients always changing? In other words, can we stop learning at a certain error percentage and then take the resulting set of numbers and hard-code them onto hardware (for driving cars, translating, painting)?

https://www.popularmechanics.com/technology/design/a28816626/worlds-largest-computer-chip/
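To make the question concrete, here is a minimal sketch (my own toy example, not from the article above): a tiny two-layer network trained with back propagation on a hypothetical XOR task. During training the coefficients keep changing; the moment we stop, they are just fixed arrays of numbers, and inference becomes pure multiply-adds on those frozen values, which is the part that could in principle be burned into silicon.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training data: XOR truth table (an assumption for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial coefficients for the hidden and output layers.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

# Training loop: back propagation keeps *changing* the coefficients.
for _ in range(10000):
    h = sigmoid(X @ W1)                   # hidden-layer activations
    out = sigmoid(h @ W2)                 # network output
    d_out = (out - y) * out * (1 - out)   # output-layer gradient
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer gradient
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

# "Stop learning": the coefficients are now static arrays of numbers.
FROZEN_W1, FROZEN_W2 = W1.copy(), W2.copy()

def frozen_predict(x):
    """Inference only: no learning, just fixed multiply-adds on
    frozen coefficients -- the operation a chip could hard-wire."""
    return sigmoid(sigmoid(x @ FROZEN_W1) @ FROZEN_W2)

preds = frozen_predict(X)
print(preds.ravel())
```

The trade-off the question hints at: the frozen coefficients give a fixed (inference-only) player, not one that keeps improving, so "perfect" is bounded by whatever error level training had reached when learning was stopped.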