Neural net processor vs. microprocessor based on VLSI technology

SUMMARY

The discussion highlights the technological differences between neural net processors and traditional microprocessors based on Very Large Scale Integration (VLSI) technology. VLSI integrates thousands of transistors into a single chip, while neural networks (NN) focus on adaptive learning and recognition, often implemented in software rather than hardware. Field Programmable Gate Arrays (FPGAs) serve as a potential hardware representation of NN, offering flexibility to prototype various applications but facing challenges due to programming overhead and compatibility issues. The conversation concludes that while neural networks present an appealing adaptive solution, traditional methods remain more practical for most applications.

PREREQUISITES
  • Understanding of Very Large Scale Integration (VLSI) technology
  • Familiarity with neural networks and their applications in artificial intelligence
  • Knowledge of Field Programmable Gate Arrays (FPGAs) and their functionality
  • Basic concepts of microprocessors and their role in computing
NEXT STEPS
  • Explore the architecture and programming of Field Programmable Gate Arrays (FPGAs)
  • Research the implementation of neural networks in various software frameworks
  • Study the evolution and advancements in Very Large Scale Integration (VLSI) technology
  • Investigate the challenges and solutions in integrating FPGAs into existing computer architectures
USEFUL FOR

This discussion is beneficial for hardware engineers, software developers, AI researchers, and anyone interested in the comparative analysis of neural network processors and traditional microprocessors.

probableexist
What will be the technological difference between a neural-net-based processor and the microprocessor based on VLSI technology that we use today?
 
Very Large Scale Integration (VLSI):

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into billions of transistors.

Taken From: http://en.wikipedia.org/wiki/Very-large-scale_integration

The phrase "neural networks" (NN) has been bounced around for quite a while and is more of a "catch phrase" than a precise reference to any particular semiconductor topology. In general the ideas behind NN revolve around "adaptive learning" and "recognition". While some studies have focused on discrete hardware approaches, typical modern approaches focus on software and firmware, making them relatively platform independent. In many ways NN are a subset of Artificial Intelligence (AI), that is, NN seek to solve problems intuitively rather than sequentially or algorithmically.

From a strictly hardware point-of-view, probably the best example of what NN might "look like" would be Field Programmable Gate Arrays (FPGA). FPGAs consist of hundreds to hundreds of thousands of "Programmable Logic Elements" (PLE), and in some cases dedicated RAM, microprocessors, multipliers etc. While these devices are typically used to "prototype" VLSI hardware, they could easily be integrated into a PC either as a "co-processor" or even as the primary CPU. Numerous "Application Specific Integrated Circuits" (ASIC) could be replaced with a single FPGA. For instance, various Digital Signal Processors, cryptographic engines or Advanced Math Processors could all be synthesized and placed on an on-board FPGA to dramatically decrease the processing time a standard CPU would require for a particular task.
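
To make the "Programmable Logic Element" idea concrete (again my own heavily simplified sketch, not a description of any real FPGA fabric), the core of most logic elements is a small look-up table whose contents determine what Boolean function the element computes:

```python
# My own simplification (not from the post above): an FPGA logic element is,
# at heart, a small look-up table (LUT); "configuring" the chip means loading
# truth-table bits, so the same silicon can become almost any logic function.
class LUT4:
    """A 4-input LUT: 16 configuration bits define any 4-input Boolean function."""

    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.bits = list(config_bits)

    def evaluate(self, a, b, c, d):
        # The four input bits simply index into the configuration memory.
        return self.bits[(a << 3) | (b << 2) | (c << 1) | d]

# "Program" one element as a 4-input AND (only input 1111 yields 1) ...
and4 = LUT4([0] * 15 + [1])
# ... and an identical element as a 4-input XOR (odd parity of the inputs).
xor4 = LUT4([bin(i).count("1") & 1 for i in range(16)])

print(and4.evaluate(1, 1, 1, 1))  # -> 1
print(xor4.evaluate(1, 0, 1, 1))  # -> 1 (three ones, odd parity)
```

Loading a bitstream into an FPGA amounts to filling thousands of such tables, plus the routing between them, which is why the same part can act as a DSP one minute and a cryptographic engine the next.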

High on the list of reasons PCs do NOT include an FPGA for these tasks is the huge programming overhead involved, weighed against the very small number of users who might benefit from having it available. Even migrating Windows from 16-bit to 32-bit to 64-bit created HUGE log jams in driver development, and compatibility issues keep rearing their ugly heads. Adding an FPGA to motherboards has the potential to create a serious SNAFU: for instance, what if the user wants to run two programs that rely on some of the same resources on the FPGA? Which one wins? Certainly not the user!

Playing with an FPGA evaluation board is fun and very educational. In almost no time you can load a microcontroller and test your latest firmware changes, then a moment later load a synthesized video card and stream video to it. But as versatile as it is, this flexibility comes with an enormous programming overhead that the average user just doesn't need or want. The same appears to be true of most pursuits into NN: while the idea of an adaptive system that serves as a general solution to many problems seems appealing, in most cases more traditional "firm" approaches to individual problems are more practical.

Fish
 
