How to know the runtime of triggers

  • Thread starter: ChrisVer
  • Tags: Runtime
SUMMARY

This discussion focuses on measuring the runtime of triggers in data acquisition systems, specifically within the context of the ATLAS experiment. It highlights the distinction between software-based triggers, such as Level 2 and the Event Filter, which can be tested on simulated data to measure processing times, and hardware triggers like Level 1, which require physical simulation and direct measurement techniques. The discussion emphasizes using oscilloscopes to observe the timing of hardware interrupts and suggests employing RS flip-flops to track the completion of hardware routines. The critical timing benchmarks mentioned are 50 ms/event for Level 2 and 2.5 μs for Level 1.

PREREQUISITES
  • Understanding of ATLAS trigger levels and their functions
  • Familiarity with hardware simulation techniques
  • Knowledge of using oscilloscopes for timing measurements
  • Basic electronics concepts, including RS flip-flops and hardware interrupts
NEXT STEPS
  • Research techniques for simulating hardware triggers in data acquisition systems
  • Learn about the implementation and measurement of hardware interrupts
  • Explore the use of oscilloscopes for precise timing analysis in electronics
  • Investigate performance benchmarks for software-based triggers in high-energy physics experiments
USEFUL FOR

Engineers and researchers involved in high-energy physics experiments, particularly those working with data acquisition systems and trigger mechanisms, will benefit from this discussion.

ChrisVer
I am not sure if this belongs here or somewhere else, but ok...
Suppose we create an algorithm that does some online job during the data acquisition of a detector run... How can we know whether our algorithm is too slow for our needs?
For example, ATLAS has several trigger levels, and Level 2 + the Event Filter are software-based triggers... I guess that for software all you have to do is feed it some fake data and let the PC measure its runtime, right? Then you know whether it can process the amount of data you want within your time limits (e.g. I have read that L2 operates at 50 ms/event).
But what about hardware triggers (like L1)? That one of course does all the "hard work", since it sees every event (40 events/μs) and has to decide within 2.5 μs. My problem is not how it can do it [just make it do a rough decision] but how we know that it operates at such speeds.
 
You can simulate the hardware (you have to develop it anyway...), and once you have it as physical hardware, you can also feed it simulated data directly at the hardware level to test it.
 
You can set an external bit when the routine is called, then clear it when the event has been handled.
Watching that pin with an oscilloscope will show you the time needed to handle the event; trigger the scope on the leading edge.
For a hardware interrupt, you can set an RS flip-flop with the interrupt signal, then clear the flip-flop in software when done.
 
