Neural net processor vs. microprocessor of VLSI tech

AI Thread Summary
Neural net processors differ from traditional microprocessors based on VLSI technology primarily in their approach to problem-solving, emphasizing adaptive learning and recognition over sequential processing. Neural networks are often implemented in software and firmware, making them platform-independent, while hardware implementations like FPGAs offer flexibility but come with significant programming overhead. FPGAs can prototype various VLSI hardware functions, potentially replacing multiple application-specific integrated circuits (ASICs) with a single device. However, the complexity and compatibility issues associated with integrating FPGAs into PCs limit their widespread adoption. Ultimately, while neural networks present innovative solutions, traditional methods remain more practical for many applications.
probableexist
What will be the technology difference between a neural-net-based processor and the VLSI-based microprocessors we use today?
Very Large Scale Integration (VLSI):

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into billions of transistors.

Taken From: http://en.wikipedia.org/wiki/Very-large-scale_integration

The phrase "neural networks" (NN) has been bounced around for quite a while and is more of a "catch phrase" than a precise reference to any particular semiconductor topology. In general the ideas behind NN revolve around "adaptive learning" and "recognition". While some studies have focused on discrete hardware approaches, typical modern approaches focus on software and firmware, making them relatively platform independent. In many ways NN are a subset of Artificial Intelligence (AI), that is, NN seek to solve problems intuitively rather than sequentially or algorithmically.
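To make "adaptive learning" concrete, here is a minimal sketch of the idea: a single perceptron that learns the AND function from examples rather than being explicitly programmed. This is purely illustrative (real NN software uses multi-layer networks and frameworks well beyond this), and all names here are made up for the example.

```python
# Minimal sketch of adaptive learning: a perceptron that learns AND
# from labeled examples instead of being programmed with the rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = t - y
            # The weights adapt in response to errors on examples --
            # this is the "learning" part; no AND logic is coded anywhere.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_samples)
for (x1, x2), t in and_samples:
    y = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print((x1, x2), "->", y)  # reproduces the AND truth table after training
```

The contrast with a conventional microprocessor program is the point: the algorithmic version states the rule up front, while the NN version converges on it from data.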

From a strictly hardware point-of-view, probably the best example of what NN might "look like" would be Field Programmable Gate Arrays (FPGA). FPGAs consist of hundreds to hundreds of thousands of "Programmable Logic Elements" (PLE), and in some cases dedicated RAM, microprocessors, multipliers etc. While these devices are typically used to "prototype" VLSI hardware, they could easily be integrated into a PC either as a "co-processor" or even as the primary CPU. Numerous "Application Specific Integrated Circuits" (ASIC) could be replaced with a single FPGA. For instance, various Digital Signal Processors, cryptographic engines or Advanced Math Processors could all be synthesized and placed on an on-board FPGA to dramatically decrease the processing time a standard CPU would require for a particular task.
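To illustrate what a "Programmable Logic Element" actually is, here is a rough software model: in most real FPGAs the basic element is a small look-up table (LUT) whose contents are loaded at configuration time, so the same silicon can become any Boolean function. This sketch is not any vendor's API, just a conceptual model.

```python
# Conceptual model of an FPGA logic element: a 4-input look-up table.
# 16 configuration bits are enough to realize ANY function of 4 inputs,
# which is why one fabric can be "re-synthesized" into different hardware.

class LUT4:
    def __init__(self, truth_table_bits):
        assert len(truth_table_bits) == 16
        self.bits = truth_table_bits  # set at "configuration" time

    def eval(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.bits[index]

# "Program" one LUT as a 4-input AND, another as a parity (XOR) function
and4 = LUT4([1 if i == 0b1111 else 0 for i in range(16)])
parity = LUT4([bin(i).count("1") % 2 for i in range(16)])

print(and4.eval(1, 1, 1, 1))    # 1
print(parity.eval(1, 0, 1, 0))  # 0 (two bits set, even parity)
```

A real device wires hundreds of thousands of such elements together through a programmable routing fabric, which is what lets one FPGA stand in for several ASICs.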

High on the list of reasons PCs do NOT have an FPGA to accomplish these tasks is the huge programming overhead involved, compared to the very small number of users who might benefit from having it available. Even migrating Windows from 16-bit to 32-bit to 64-bit created HUGE log jams in driver development, and compatibility issues keep rearing their ugly heads. Adding an FPGA to motherboards has the potential to create a serious SNAFU: for instance, what if the user wants to run two programs that utilize some of the same resources on the FPGA? Which one wins? Certainly not the user!
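The contention problem is the same one any shared co-processor has. As a hypothetical sketch (program names and the "crypto engine" are invented for the example), some arbitration policy must decide who wins, and the loser simply waits:

```python
# Hypothetical sketch of FPGA resource contention: two programs both
# want the single crypto engine synthesized on the fabric. A lock gives
# first-come-first-served arbitration; the losing program just blocks.

import threading
import time

fpga_crypto_engine = threading.Lock()  # stands in for one physical resource
log = []

def use_engine(program):
    with fpga_crypto_engine:            # only one holder at a time
        log.append(program + " acquired engine")
        time.sleep(0.01)                # pretend to offload work
        log.append(program + " released engine")

t1 = threading.Thread(target=use_engine, args=("program A",))
t2 = threading.Thread(target=use_engine, args=("program B",))
t1.start(); t2.start(); t1.join(); t2.join()
print(log)  # accesses are strictly serialized; one program always waits
```

Software locks at least resolve the conflict; reconfiguring the fabric itself between programs is far more expensive, which is part of why the idea never caught on for general-purpose PCs.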

Playing with an FPGA evaluation board is fun and very educational. In almost no time you can load a microcontroller and test your latest firmware changes, then a moment later load a synthesized video card and stream video to it. But as versatile as it is, this functionality comes with an enormous programming overhead that the average user just doesn't need or want. This appears to be the case with most pursuits into NN: while the idea of an adaptive system that serves as a general solution for many problems seems appealing, in most cases more traditional "firm" approaches to individual problems are more practical.

Fish