What is the meaning of this line? “Memory-mapped, cached view of external QSPI flash. The cache is specified as 32 KB with 4-way associativity.”

Memory-mapped, cached view of external QSPI flash. The cache is specified as 32 KB with 4-way associativity.




Does it mean that my external QSPI Flash is only 32Kb, or that it has been memory mapped onto 32Kb?



Does "cached view" mean that repeated reads will get the data cached within the processor and not actually access the memory?










      Tags: flash cache






      5 votes · asked Sep 5 at 7:07 by MaNyYaCk · edited Sep 6 at 7:30 by dim




















          2 Answers






          Accepted answer, 34 votes (answered Sep 5 at 7:59 by dim, edited Sep 5 at 13:35)










          The confusion probably comes from the formulation "memory-mapped, cached view". The fact that it is memory-mapped has nothing to do with the fact that it is cached: the size of the memory mapping is independent of the size of the cache.



          So, I'll break it down for you:




          Memory-mapped




          Means you can access the contents of the external memory directly by reading/writing the main memory address space (at some specified address). It also typically implies that, if the external memory contains executable code, you can execute this code simply by branching to it: you don't need to copy the code into internal memory first. This is achieved by the MCU, which internally translates any access to this part of the address space into the required QSPI commands to read/write the external flash on the fly. By itself, this does not imply that there is a cache.
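
          As a firmware-side sketch of what this enables (the 0x90000000 base address and the 0x400 code offset here are hypothetical; take the real QSPI window from your MCU's memory map):

              #include <stdint.h>

              #define QSPI_BASE 0x90000000UL  /* hypothetical mapped-flash base */

              void demo(void)
              {
                  /* Read external flash like ordinary memory: the controller
                     turns this bus access into QSPI commands on the fly. */
                  volatile const uint8_t *flash = (volatile const uint8_t *)QSPI_BASE;
                  uint8_t b = flash[0];
                  (void)b;

                  /* Execute-in-place: branch straight to code stored in the
                     flash (hypothetical offset; +1 sets the Thumb bit on a
                     Cortex-M core). */
                  void (*entry)(void) = (void (*)(void))(QSPI_BASE + 0x400u + 1u);
                  entry();
              }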




          Cached




          Means that data read from this part of the memory is placed in a smaller, intermediate memory area (not directly accessible), which the MCU looks up first whenever the external memory has to be accessed again. This way, when the same data is accessed twice, the external memory does not need to be read a second time: the data is retrieved from the cache, which is much faster.



          Indeed, this is very useful for memory-mapped QSPI. The QSPI interface is much slower than the CPU: any read/write operation has to be translated into commands sent serially on a few signal lines, which adds a lot of overhead. To reduce this overhead, you'll typically try to read multiple bytes for each QSPI access, and store them in a cache so that, if the next read addresses the neighboring byte (which is likely), you have it ready.
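
          To see why line-sized bursts pay off, here is a minimal, runnable toy model (a sketch of the principle only; the real cache is hardware inside the MCU, and the 32-byte line size is an assumption since it isn't specified):

              #include <stdint.h>
              #include <stdio.h>
              #include <string.h>

              #define LINE_SIZE 32u  /* assumed line size, not from the datasheet */

              static uint8_t  ext_flash[1024];      /* stands in for external flash */
              static uint8_t  line_buf[LINE_SIZE];  /* one cached line              */
              static uint32_t line_tag = UINT32_MAX;
              static unsigned qspi_accesses;

              static uint8_t cached_read(uint32_t addr)
              {
                  uint32_t tag = addr / LINE_SIZE;
                  if (tag != line_tag) {            /* miss: fetch the whole line */
                      memcpy(line_buf, &ext_flash[tag * LINE_SIZE], LINE_SIZE);
                      line_tag = tag;
                      qspi_accesses++;              /* one slow QSPI burst        */
                  }
                  return line_buf[addr % LINE_SIZE];/* hit: served from the cache */
              }

              int main(void)
              {
                  for (uint32_t a = 0; a < 256; a++)   /* sequential reads */
                      (void)cached_read(a);
                  printf("%u QSPI bursts for 256 byte reads\n", qspi_accesses);
                  return 0;                            /* prints 8, not 256 */
              }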




          32 KB




          Here, this is the size of the cache, not the size of the memory map. The memory map will typically be big enough to cover the whole external memory (check the detailed specs).




          4-way associativity




          This is the way the cache is internally organized. The cache is much smaller than the external memory, so it cannot hold everything. The naive way to implement a cache would be to store all the recently accessed bytes along with their corresponding addresses and, on each subsequent access, search the whole cache for an entry whose address matches the accessed address. This is extremely inefficient: for each byte you would have to store its address, which multiplies the required cache size by five (assuming 32-bit addresses: for each byte, you need the data byte plus four bytes for the address), and for each access you would need to compare the address against up to 32768 stored values to check whether it is already in the cache.
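
          Concretely, with the 32 KB figure: 32768 data bytes plus 4 × 32768 = 131072 address bytes means 163840 bytes of storage just to cache 32768 bytes of data, and a single lookup could require up to 32768 address comparisons.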



          So, here is how it is done:



          • First, the cache is organized in lines of N bytes (e.g. 16 or 32 bytes; note that the cache line size is not specified in your case). You store one address per cache line, not one per byte, which saves a lot of space.

          • Then, a given address cannot be stored just anywhere in the cache. When accessing the cache, part of the address gives you the index of a "cache set". Each cache set can contain 4 cache lines (in your case). When checking whether the data is in the cache, you therefore only have the addresses of these 4 cache lines to compare, because you know that if the data is cached at all, it is necessarily in this set. This reduces the complexity of the cache structure a great deal, at the expense of less flexibility in storing the data (meaning a possibly lower cache hit rate, depending on the memory access patterns).

          This is what the cache associativity is: the number of cache lines per set. It gives an indication of the likelihood that you can retrieve data from the cache if it has been read before. The bigger the associativity, the better, but it makes the cache more complex and more expensive to manufacture, and at some point the benefit is no longer worth it. 4 is not bad (which is why they are proud to advertise it).
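
          A small runnable sketch of how a lookup would split an address with these parameters (again assuming a 32-byte line, which the datasheet does not actually specify):

              #include <stdint.h>
              #include <stdio.h>

              /* 32 KB total / 32-byte lines = 1024 lines; 1024 lines / 4 ways
                 = 256 sets.  The line size is an assumption. */
              #define LINE_SIZE  32u
              #define NUM_WAYS   4u
              #define CACHE_SIZE (32u * 1024u)
              #define NUM_SETS   (CACHE_SIZE / (LINE_SIZE * NUM_WAYS))  /* 256 */

              int main(void)
              {
                  uint32_t addr = 0x00123456u;  /* arbitrary flash address */

                  uint32_t offset = addr % LINE_SIZE;              /* byte in line */
                  uint32_t set    = (addr / LINE_SIZE) % NUM_SETS; /* which set    */
                  uint32_t tag    = addr / (LINE_SIZE * NUM_SETS); /* stored tag   */

                  /* A lookup only compares 'tag' against the 4 tags stored in
                     this set: 4 comparisons instead of thousands. */
                  printf("offset=%u set=%u tag=0x%x\n",
                         (unsigned)offset, (unsigned)set, (unsigned)tag);
                  return 0;
              }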






          • niiiice! This should be the accepted answer. – Marcus Müller, Sep 5 at 11:04










          • Well done mate! Much more than I would have written. +1 – Sparky256, Sep 6 at 2:25






          • I know comments are not to be used for this, but thanks! This is far more information than I asked for, and it was extremely helpful. – MaNyYaCk, Sep 6 at 8:46

















          Answer, 6 votes (answered Sep 5 at 7:11 by Marcus Müller)













          The sentence is pretty clear:





          The cache is specified as 32 KB with 4-way associativity.





          So,




          Does it mean that my external QSPI Flash is only 32Kb or it has been memory mapped onto 32Kb?




          Neither. The cache is 32 kB (not Kb, which is Kelvinbit! Watch your capitalization!).




          Does cached view mean that repeated read will get the data cached within the processor and not actually access the memory?




          Well, see Wikipedia on caches. Yes, a repeated read within the cached region will fetch the information from the cache. That's what a cache does.



          No; that cache is not necessarily part of the processor, but rather of the flash controller peripheral.






          • Nice answer. But while we're nitpicking on notation, the cache isn't 32 kB either (32000 bytes), it's 32 KiB (32*1024 bytes). – wjl, Sep 5 at 13:30






          • @wjl KB and KiB both have the same meaning. KB has been in use for much longer, though. The prefix k means 1000; the prefixes K and Ki both mean 1024. – kasperd, Sep 5 at 20:13









