Sub microsecond jitter accuracy - what to use?

I have four TTL-level signals that must change state in a particular order.

From the "go" signal on one of the lines, all toggling of the other lines is done within 100 µs. The other three signals will each toggle at most a couple of times. The start, end, and duration between state changes vary depending on metrics in the system that are known well before the go signal occurs, so the sequence of pulses and their durations needs to be programmable. The timing and durations are known before the sequence starts.

Here's the rub: I need sub-microsecond jitter accuracy. The 120 MHz ARM micro I'm using can't guarantee such deterministic timing due to pipelining and a host of other performance-enhancing features. We can do our best to architect the system to minimize the jitter, but I want to know whether a faster micro, a DSP, or programmable logic (a CPLD, PAL, etc.) is the typical way to get the accuracy and resolution I'm looking for.

In the past, with an 8-bit micro running at 8 MHz and one instruction per clock cycle, I could write some assembler, put the micro to sleep, wake on interrupt, count clock cycles, and get 0.25 µs accuracy.

What technology do I need to investigate to achieve this resolution and accuracy?










  • 2
    FPGAs will do the trick.
    – Lelesquiz, Oct 1 at 12:47

  • 3
    It would help to have a spec, or at least some examples of which signals have to be generated as a result of which other signals, with minimum and maximum allowable edge timing.
    – Olin Lathrop, Oct 1 at 13:03

  • 3
    It would help if you could specify exactly which part you are using. By pipelining, do you actually mean branch prediction? A part with cache memory can get nondeterministic behavior when branch prediction fails, particularly when flash wait states are involved. Pipelining by itself shouldn't cause jitter; it just makes the code faster overall. And pipelining has been around since long before ARM, so your old MCU probably had some flavour of it.
    – Lundin, Oct 1 at 13:27

  • 4
    You start with a state machine with timing requirements on all inputs and outputs, then sync to a stable clock to eliminate jitter, and then choose a solution, not the other way around. That is: bottom-up first, then, for cost reasons, top-down to arrive at a cost-effective solution.
    – Tony EE rocketscientist, Oct 1 at 13:52

  • 8
    "In the past with an 8 bit micro..." 8-bit micros still exist. Some of them are very fast. Why wouldn't you use one?
    – Dave Tweed♦, Oct 1 at 14:12















microcontroller jitter

edited Oct 1 at 12:57 by winny
asked Oct 1 at 12:40 by ignoramusextraordinaire (new contributor)
7 Answers






15 votes













It's not clear exactly what signals you need to generate, or their timing relative to incoming signals.

However, 250 ns isn't really all that hard to achieve with something like an EP-series dsPIC, for example. At a 70 MHz instruction rate, that gives you up to 17 instruction cycles of allowable jitter. That's a lot.

Having the incoming signal cause an interrupt, then generating the output signals with fixed instruction timing, will give you much less than 17 cycles of jitter. It would be even better if the input signal could trigger a PWM generator or the like. But you haven't given enough information about the nature of the output signals to know whether the specific hardware available on such micros would be applicable.






answered Oct 1 at 13:08 by Olin Lathrop
  • Hi Olin - I'm planning on using the Atmel ATSAMD51, and right now no system resources are allocated. These four signals are the highest priority, so I can assign resources as necessary. The output signals are connected to FET drivers.
    – ignoramusextraordinaire, Oct 1 at 15:44

  • 5
    The ATSAMD51 line of microcontrollers has an event system peripheral that allows autonomous, low-latency, configurable communication between peripherals. Several peripherals can be configured to generate and/or respond to signals known as events, which could fit the bill for your application. It also contains a configurable custom logic block that could work here while requiring minimal interaction with the MCU core.
    – Kvegaoro, Oct 1 at 18:19


















6 votes













Usually you can achieve this kind of real-time behavior as long as the output does not depend on software/interrupts; that is, as long as pins aren't set from an ISR or similar, in which case you will have microsecond-level jitter. Interrupt latency might be a fixed time, but I wouldn't count on it, in case more than one interrupt fires at once, etc.

You might be able to solve this with the output-compare feature of a hardware timer: all relevant pins are set when the timer elapses, as when generating PWM. This can often be done with an accuracy of the system clock, or system clock/2. Another alternative is DMA, if it is supported for the specific pins.

This may work down to somewhere around 50–100 ns, where you'll be at the mercy of the analog characteristics of the pins.

And then, of course, you won't get better accuracy than your oscillator allows. You certainly can't use a built-in RC oscillator; you need a high-accuracy crystal or external oscillator.






5 votes













DMA and a timer?

A couple of SPI buses, just using the data pins (possibly again with DMA if you need more than 32 time slots)?

My feeling is that 1 µs should be quite doable if you pick your I/O pins correctly and are prepared to play a few low-level games.

100 ns I would have to think about, but maybe something devious like loading up a QSPI RAM chip and then clocking out the bit pattern using a timer as the clock?

10 ns is FPGA territory.






4 votes













Check whether you have timers that can drive multiple pins, typically used for motor control (usually 4 or 6 outputs, for H-bridge and 3-phase drives respectively). In many cases such timers have "preload" registers that let you seamlessly modify the period and the duty cycle, which means you can essentially generate an arbitrary waveform with them. Done right, such waveforms are precise down to the timer resolution.






      • 1




        Thanks Dmitry - I'm using the ATSAMD51, it has these features and it looks like it might work
        – ignoramusextraordinaire
        Oct 1 at 16:43

















2 votes













Maybe something like a Cypress PSoC with its programmable logic cells; Microchip has parts with similar capabilities. They're like microcontrollers with a tiny, limited slice of FPGA functionality that you can customize. It sounds like you will need some form of either DMA or FPGA fabric to hit your requirements.






2 votes













STM32 devices have very powerful and configurable timer peripherals. They can be chained or synchronized and offer cycle-accurate outputs. You may want to spend some time reading the datasheets for the STM32L4 and H4 devices to start, and perhaps reviewing some of STMicro's timer-specific documentation.

I'm personally using the timers along with an FPGA to give me microsecond-accurate timing and sequencing for 32 digital outputs. The FPGA is not doing anything timing-specific; it just muxes the STM32's excellent, configurable timers onto one of 32 outputs. The final hardware will eliminate the STM32, but for prototyping and development it can't be beat.






1 vote













I'm doing something very similar at work with a DSP. I have an FPGA which carries out the precision-timing part. I was hoping to use a square wave from the DSP to test the PLL for syncing two FPGAs. In fact I found that although the FPGA timing tolerances were around 0.01 µs, based on clock tolerances, my DSP had too much else going on to do better than 0.1 µs.

That is a pretty full-on processing loop, though; with a less intensive loop it could of course be better. For the parts which really need to be deterministic, running them at the very start of the loop can help. Be warned, though, that although interrupt latency is predictable, it is very much non-zero! On my platform it's 91 clock ticks at 456 MHz. I can easily get sub-µs jitter from interrupts, but microsecond delays need the interrupt latency to be baked into the calculations.








            7 Answers
            7






            active

            oldest

            votes








            7 Answers
            7






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            15
            down vote













            It's not clear what exactly the signals are you need to generate, and their timing relative to incoming signals.



            However, 250 ns isn't really all that hard to achieve with something like a EP series dsPIC, for example. At 70 MHz instruction rate, that gives you up to 17 instruction cycles of allowable jitter. That's a lot.



            Having your incoming signal cause a interrupt, then generating the output signals from fixed instruction timing will give you much less than 17 cycles of jitter. It would be even better if the input signal can trigger a PWM generator or the like. But, you haven't given enough information about the nature of the output signals to know whether specific hardware available on such micros would be applicable.






            share|improve this answer




















            • Hi Olin - I'm planning on using the Atmel ATSAMD51 and right now no system resources are allocated. The highest priority are these four signals so I can assign resources as necessary. The output signals are connected to FET drivers.
              – ignoramusextraordinaire
              Oct 1 at 15:44






            • 5




              ATSAMD51 line of micro controllers have an event system peripheral that allows autonomous, low-latency and configurable communication between peripherals.Several peripherals can be configured to generate and/or respond to signals known as events. This could fit the bills for your application. It also contains a configurable custom logic block that could work for your application by providing minimal interaction to the mcu core
              – Kvegaoro
              Oct 1 at 18:19















            up vote
            15
            down vote













            It's not clear what exactly the signals are you need to generate, and their timing relative to incoming signals.



            However, 250 ns isn't really all that hard to achieve with something like a EP series dsPIC, for example. At 70 MHz instruction rate, that gives you up to 17 instruction cycles of allowable jitter. That's a lot.



            Having your incoming signal cause a interrupt, then generating the output signals from fixed instruction timing will give you much less than 17 cycles of jitter. It would be even better if the input signal can trigger a PWM generator or the like. But, you haven't given enough information about the nature of the output signals to know whether specific hardware available on such micros would be applicable.






            share|improve this answer




















            • Hi Olin - I'm planning on using the Atmel ATSAMD51 and right now no system resources are allocated. The highest priority are these four signals so I can assign resources as necessary. The output signals are connected to FET drivers.
              – ignoramusextraordinaire
              Oct 1 at 15:44






            • 5




              ATSAMD51 line of micro controllers have an event system peripheral that allows autonomous, low-latency and configurable communication between peripherals.Several peripherals can be configured to generate and/or respond to signals known as events. This could fit the bills for your application. It also contains a configurable custom logic block that could work for your application by providing minimal interaction to the mcu core
              – Kvegaoro
              Oct 1 at 18:19













            up vote
            15
            down vote










            up vote
            15
            down vote









            It's not clear what exactly the signals are you need to generate, and their timing relative to incoming signals.



            However, 250 ns isn't really all that hard to achieve with something like a EP series dsPIC, for example. At 70 MHz instruction rate, that gives you up to 17 instruction cycles of allowable jitter. That's a lot.



            Having your incoming signal cause a interrupt, then generating the output signals from fixed instruction timing will give you much less than 17 cycles of jitter. It would be even better if the input signal can trigger a PWM generator or the like. But, you haven't given enough information about the nature of the output signals to know whether specific hardware available on such micros would be applicable.






            share|improve this answer












            It's not clear what exactly the signals are you need to generate, and their timing relative to incoming signals.



            However, 250 ns isn't really all that hard to achieve with something like a EP series dsPIC, for example. At 70 MHz instruction rate, that gives you up to 17 instruction cycles of allowable jitter. That's a lot.



            Having your incoming signal cause a interrupt, then generating the output signals from fixed instruction timing will give you much less than 17 cycles of jitter. It would be even better if the input signal can trigger a PWM generator or the like. But, you haven't given enough information about the nature of the output signals to know whether specific hardware available on such micros would be applicable.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered Oct 1 at 13:08









            Olin Lathrop

            279k28331785




            279k28331785











            • Hi Olin - I'm planning on using the Atmel ATSAMD51 and right now no system resources are allocated. The highest priority are these four signals so I can assign resources as necessary. The output signals are connected to FET drivers.
              – ignoramusextraordinaire
              Oct 1 at 15:44






            • 5




              ATSAMD51 line of micro controllers have an event system peripheral that allows autonomous, low-latency and configurable communication between peripherals.Several peripherals can be configured to generate and/or respond to signals known as events. This could fit the bills for your application. It also contains a configurable custom logic block that could work for your application by providing minimal interaction to the mcu core
              – Kvegaoro
              Oct 1 at 18:19

















            • Hi Olin - I'm planning on using the Atmel ATSAMD51 and right now no system resources are allocated. The highest priority are these four signals so I can assign resources as necessary. The output signals are connected to FET drivers.
              – ignoramusextraordinaire
              Oct 1 at 15:44






            up vote
            6
            down vote













            Usually you can achieve this kind of real-time as long as the output does not depend on software/interrupts. That is, pins aren't set from an ISR or similar, in which case you will have microsecond jitter. Interrupt latency might be a static time, but I wouldn't count on it, in case more than one interrupt fires at once etc.



            You might be able to solve this with the output compare feature of the hardware timer. That is, all relevant pins are set when a timer elapses, like for example when using PWM. This can often be done with system clock or system clock/2 accuracy. Other alternatives are DMA, if supported for the specific pins.



            This may work down to 50-100ns somewhere, where you'll be at the mercy of the analog characteristics of the pins.



            And then of course, you won't be able to get better accuracy than your oscillator allows. You certainly can't use some built-in RC oscillator, but need to use a high accuracy crystal or external oscillator.
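To see how far timer quantization alone limits you, here is a minimal sketch in plain C, assuming a hypothetical 120 MHz timer clock as in the question. It converts a desired edge time into an output-compare value; the residual jitter from quantization is a single tick, well under 10 ns:

```c
#include <stdint.h>

/* Sketch: convert a desired edge time (nanoseconds) into an output-compare
 * value, assuming a hypothetical 120 MHz timer clock (~8.3 ns per tick).
 * Edge times are assumed to fit the timer's counter width. */
#define TIMER_HZ 120000000ULL

uint32_t edge_ns_to_ticks(uint64_t edge_ns)
{
    /* Round to the nearest tick to halve the worst-case quantization error. */
    return (uint32_t)((edge_ns * TIMER_HZ + 500000000ULL) / 1000000000ULL);
}

/* Worst-case jitter from quantization alone, in nanoseconds (one tick,
 * rounded up): far below the 1 us requirement. */
uint64_t quantization_jitter_ns(void)
{
    return 1000000000ULL / TIMER_HZ + 1;
}
```

Once the compare registers are loaded before the "go" signal, the hardware produces the edges on its own; the quantization error above is then the dominant digital jitter term.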






                answered Oct 1 at 13:23









                Lundin

                3,416929




                    up vote
                    5
                    down vote













                    DMA and a timer?



                    A couple of SPI buses, just using the data pins (possibly again with DMA if you need more than 32 time slots)?



                    My feeling is that 1us should be well doable if you pick your IO pins correctly and are prepared to play a few low level games.



                    100ns I would have to think about, but maybe something devious with loading up a QSPI RAM chip and then clocking out the bit pattern using a timer as the clock?



                    10ns is FPGA territory.
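The SPI-data-pin trick amounts to precomputing the waveform as a bit pattern and letting the shift hardware clock it out. A sketch of the packing step (illustrative only: the 32-slot limit and LSB-first shift order are assumptions, not any particular peripheral's behaviour):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: pack one channel's edge schedule into a bit pattern for an SPI
 * data pin or shift register. Bit i covers time slot [i*slot_ns,
 * (i+1)*slot_ns); once loaded, the shift hardware clocks it out with
 * crystal-grade jitter. edges_ns must be sorted ascending. */
uint32_t pack_waveform(const uint64_t *edges_ns, size_t n_edges,
                       uint64_t slot_ns, int initial_level)
{
    uint32_t pattern = 0;
    int level = initial_level;
    size_t e = 0;
    for (int slot = 0; slot < 32; slot++) {
        uint64_t t = (uint64_t)slot * slot_ns;
        /* Apply every edge that occurs at or before this slot's start. */
        while (e < n_edges && edges_ns[e] <= t) { level = !level; e++; }
        if (level)
            pattern |= 1u << slot;
    }
    return pattern;
}
```

The timing resolution is then the slot width (one bit clock), and the jitter is set by the SPI clock source rather than by software.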






                        answered Oct 1 at 13:03









                        Dan Mills

                        10.2k11023




                            up vote
                            4
                            down vote













                            Check if you have timers which are able to drive multiple pins, typically used for motor control (usually 4 or 6, for H-bridge and 3-phase respectively). In many cases, such timers have "preload" registers which allow you to seamlessly modify the period and the duty cycle, which means you can essentially generate an arbitrary waveform with them. If done right, such waveforms are precise down to the timer resolution.
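As a concrete illustration of the preload approach: precompute the gap between successive edges so that each timer update event just latches the next interval. This is a generic sketch, not tied to any vendor's register names:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: turn an absolute edge schedule (in timer ticks) into the
 * sequence of auto-reload values to preload between update events.
 * With hardware preload, each new value latches at the update event
 * itself, so the output toggles with no software-induced jitter.
 * Edges must be ascending. */
size_t edges_to_reloads(const uint32_t *edges, size_t n, uint32_t *reloads)
{
    if (n == 0)
        return 0;
    reloads[0] = edges[0];                    /* "go" to the first edge */
    for (size_t i = 1; i < n; i++)
        reloads[i] = edges[i] - edges[i - 1]; /* gap to the next edge */
    return n;
}
```

Software only has to write the next reload value some time before each update event, which is easy within the 100 us window; the hardware takes care of the exact instant.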






                            • 1




                              Thanks Dmitry - I'm using the ATSAMD51, it has these features and it looks like it might work
                              – ignoramusextraordinaire
                              Oct 1 at 16:43














                            answered Oct 1 at 14:13









                            Dmitry Grigoryev

                            16.8k22771




                            up vote
                            2
                            down vote













                            Maybe something like a Cypress PSoC with its programmable logic cells. I think Microchip has parts with similar capabilities. They're like microcontrollers with a small, limited FPGA fabric that you can customize. It sounds like you will need some form of either DMA or FPGA to hit your requirements.






                                answered Oct 1 at 13:39









                                gregb212

                                1514




                                    up vote
                                    2
                                    down vote













                                    STM32 devices have very powerful and configurable timer peripherals. They can be chained or synchronized and offer cycle-accurate outputs. You may want to spend some time reading over the datasheets for the STM32L4 and H4 devices to start, and perhaps reviewing some of STMicro's timer-specific documentation.



                                    I'm personally using the timers along with an FPGA to give me microsecond-accurate timing and sequencing for 32 digital outputs. The FPGA is not doing anything timing specific, but rather just MUXing the STM32's excellent and configurable timers to one of 32 outputs. The final hardware will eliminate the STM32 but for prototyping and development it can't be beat.






                                        answered Oct 2 at 1:53









                                        akohlsmith

                                        9,09312752




                                            up vote
                                            1
                                            down vote













                                            I'm doing something very similar at work with a DSP. I have an FPGA which carries out the precision timing part. I was hoping to use a square wave from the DSP to test the PLL for syncing two FPGAs. In fact I found that although the FPGA timing tolerances were set at around 0.01us based on clock tolerances, my DSP had too much else going on to do better than 0.1us.



                                            This is a pretty full-on processing loop though. With a less intensive loop, it could be better, of course. For the parts which really needed to be deterministic, running them at the very start of the loop can help. Do be warned though that although interrupt latency is predictable, it is very much non-zero! For my platform it's 91 clock ticks at 456MHz. I can easily get sub-us jitter from interrupts, but microsecond delays need the interrupt latency to be baked into the calculations.
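Baking the interrupt latency into the calculation can be as simple as subtracting the known entry overhead from the programmed delay. A sketch using the figures from this answer (91 ticks at 456 MHz; treat them as illustrative and measure your own part's latency before relying on them):

```c
#include <stdint.h>

/* Sketch: fold a fixed interrupt-entry latency into a programmed delay.
 * The figures are illustrative (91 ticks at 456 MHz, as described in
 * the surrounding text); they are not universal constants. */
#define CPU_HZ      456000000ULL
#define ISR_LATENCY 91ULL  /* ticks from hardware event to first ISR insn */

uint32_t compensated_delay_ticks(uint64_t delay_ns)
{
    /* Round to the nearest tick, then remove the fixed entry overhead. */
    uint64_t ticks = (delay_ns * CPU_HZ + 500000000ULL) / 1000000000ULL;
    return (uint32_t)(ticks > ISR_LATENCY ? ticks - ISR_LATENCY : 0);
}
```

This only removes the deterministic part of the latency, of course; any variable part (competing interrupts, wait states) still appears as jitter.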






                                                answered Oct 1 at 21:23









                                                Graham

                                                2,357411



