Event Driven Processing: Is a Standard Processor Redesign Possible?

  • Thread starter: vladpaln
  • Tags: Processor
In summary: CPUs are based on the von Neumann architecture, and instructions are fed into the processor at the clock rate. IBM's new neural chip, SyNAPSE, is event driven, operating (and using power) only when needed. The consensus in the thread is that a standard processor cannot simply swap clocked execution for event-driven processing, though conventional systems already use event-driven techniques such as interrupts and sleep modes.
  • #1
vladpaln
From what I understand, all processors are based on the von Neumann architecture, and instructions are fed into the processor at the clock rate. I've recently come across IBM's new neural chip, SyNAPSE. The chip is event driven, operating (using power) only when needed.

Can a standard processor be redesigned to use event-driven processing instead of a clock?
 
  • #2
vladpaln said:
From what I understand all processors are based on the Von Neumann architecture
Not all. DSP chips, for example, use a modified Harvard architecture, in which instruction and data fetches happen simultaneously because data storage is separate from instruction storage.
and instructions are fed into the processor based on the clock speed.
true
I've recently come across the IBMs new neural chip SyNAPSE. The chip is event driven, operating (using power) only when needed.

Can a standard processor be redesigned to use event-driven processing instead of a clock?
I don't see how. Standard processors run at clock speed pretty much by definition; you are proposing a change radical enough to warrant a different name, such as SyNAPSE. Besides, even if you only wake the chip on an event, you still have to run the processing of that event, and on a standard processor that will happen at clock speed. That may well be what the SyNAPSE chip does too, but I haven't looked into it closely.
 
  • #3
It depends on what you mean by "event driven".

Some hand-held organisers (this was before tablet computers) would go to sleep between key presses. If you were writing an email, pressing a key would wake the processor; it would get the character from the keypad, put it into the file, update the display, and go back to sleep until the next key was pressed. The display was left powered up, so you weren't aware the processor was doing all this behind the scenes to save power.

This is as much a software issue as a hardware one. The hardware can be designed to reduce the overhead of entering and leaving sleep mode, but the software has to cooperate.
 
  • #4
CWatters said:
Some hand-held organisers (this was before tablet computers) would go to sleep between key presses. If you were writing an email, pressing a key would wake the processor; it would get the character from the keypad, put it into the file, update the display, and go back to sleep until the next key was pressed.

Can you give me an example (manufacturer, model)?
 
  • #5
Even standard computers are, to some extent, event driven.
There are several hardware interrupts built in which, when asserted, cause the processor to suspend whatever it is doing, handle the interrupt as a priority, and then resume the suspended task.
For most modern applications this is all handled by the operating system, but it's still possible to write machine code for the same chip that uses those interrupts directly and runs without an operating system.
As far as I know, it's still a commonplace programming technique for things like device drivers and small single-task embedded processors.
 
  • #6
vladpaln said:
Can you give me an example (manufacturer, model)?

Unfortunately I only have definite knowledge about one make of organizer that did this and I signed an NDA that I suppose might still be enforceable. Sorry.
 
  • #7
CWatters said:
Some hand-held organisers (this was before tablet computers) would go to sleep between key presses. If you were writing an email, pressing a key would wake the processor; it would get the character from the keypad, put it into the file, update the display, and go back to sleep until the next key was pressed. The display was left powered up, so you weren't aware the processor was doing all this behind the scenes to save power.
Nearly all computers have been doing this for the last 20 years; it's the reason your CPU gets much hotter under full load than when it's mostly idle. Every Windows version since 95 was designed to make full use of a processor's ability to drop into an idle state, and the APIs of all modern OSs are designed to allow efficient event-driven programming. Applications with a graphical user interface are usually written in an event-driven style.

Event-driven programming became very popular at the beginning of the 90s, when GUIs went mainstream. But it was already in use before that, e.g. on the first Macintosh and the Amiga computers in the mid-80s, and at Xerox PARC in the 70s.
 
  • #8
DrZoidberg said:
But it was already in use before that, e.g. on the first Macintosh and the Amiga computers in the mid-80s, and at Xerox PARC in the 70s.

Young man, I'll have you know it started in the 50s.
 
  • #9
anorlunda said:
Young man, I'll have you know it started in the 50s.
Well, yes, but normal users were hardly aware of it, if at all, and there was no GUI-style programming of the kind that is common today. Event-driven operations were things like a magnetic tape unit telling the CPU it was ready to transfer data, or a keyboard making a similar announcement, and such operations were hidden from the ordinary programmer (as opposed to a systems programmer) by high-level languages such as ALGOL or FORTRAN.

Those of us who programmed in assembly language back then, and even into the days of the DOS operating system, were quite aware of system interrupts and made varying use of them, but they were considered very low-level operations.

Today it is quite normal for an ordinary programmer to make heavy use of event-driven operations when working in development environments for high-level languages. That is a significant evolution over what was done in the '50s and '60s.
 
  • #10
Many microcontrollers have a Sleep instruction that shuts things down until some hardware or timer event wakes the part up again.
There are often several levels of sleep available, from short naps with a quick wake-up to a deep sleep in which the master clock is stopped completely while a low-frequency clock ticks away at about 32 kHz, drawing around 10 µA.
 
  • #11
CWatters said:
Unfortunately I only have definite knowledge about one make of organizer that did this and I signed an NDA that I suppose might still be enforceable. Sorry.
How did the organizer handle network connection events?
 
  • #12
I think the processor had a wake-on-interrupt feature, so anything that could generate an interrupt, such as a serial interface controller, could wake it up.
 

1. What is event-driven processing?

Event-driven processing is a programming approach where the execution of a program is based on events or actions that occur, rather than a sequential flow of commands. Events can include user interactions, input from sensors, or other external triggers.

2. Why is a standard processor redesign necessary for event-driven processing?

A standard processor is designed for sequential execution of instructions, which is not efficient for event-driven processing. A redesign is necessary to optimize the processor for handling a large number of events and prioritizing them based on their significance.

3. What are the challenges in designing a standard processor for event-driven processing?

Some of the challenges include determining the optimal architecture for handling events, minimizing latency between event detection and processing, and ensuring efficient use of resources to handle multiple events simultaneously.

4. Can existing processors be adapted for event-driven processing?

It is possible to modify existing processors to some extent for event-driven processing, but a complete redesign is usually necessary to achieve optimal performance. This is because event-driven processing requires a different approach and architecture compared to traditional sequential processing.

5. How can event-driven processing benefit various industries and applications?

Event-driven processing can improve efficiency and performance in industries such as finance, healthcare, and manufacturing. It can also enhance user experience in applications such as smart homes, virtual assistants, and video games by responding to user actions in real-time.
