Why is sigset_t in glibc/musl 128 bytes large on 64-bit Linux?

Why is sigset_t on 64-bit Linux 128 bytes large in glibc and musl?



#include <signal.h>
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(sigset_t)); /* prints 128 with both glibc and musl */
    return 0;
}



Shouldn't 64 / 8 = 8 (number_of_signals / CHAR_BIT) be enough?
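For reference, the 64 in that division is just the highest signal number Linux delivers (SIGRTMAX). A quick check, assuming glibc or musl defaults where <signal.h> exposes SIGRTMAX (it is 64 on x86-64):

#include <limits.h>
#include <signal.h>
#include <stdio.h>

int main(void)
{
    printf("SIGRTMAX           = %d\n", SIGRTMAX);            /* 64           */
    printf("bytes for the mask = %d\n", SIGRTMAX / CHAR_BIT); /* 64 / 8 = 8   */
    printf("sizeof(sigset_t)   = %zu\n", sizeof(sigset_t));   /* 128 anyway   */
    return 0;
}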







  • @thrig elixir.free-electrons.com/linux/latest/source/arch/ia64/include/… suggests the kernel defines it as a 64-bit quantity (8 bytes), so I have no idea what the libcs are doing. I wish they at least commented that stuff.
    – PSkocik
    Oct 20 '17 at 14:09










  • My bad: glibc HAS commented it.
    – PSkocik
    Oct 20 '17 at 14:17














asked Oct 20 '17 at 13:22 by PSkocik; edited Oct 20 '17 at 18:22 by kiamlaluno











1 Answer
I don’t know the original reason; back in 1996, the Linux-specific header was added with the following definition:



/* A `sigset_t' has a bit for each signal.  Having 32 * 4 * 8 bits gives
   us up to 1024 signals.  */
#define _SIGSET_NWORDS 32
typedef struct
  {
    unsigned int __val[_SIGSET_NWORDS];
  } __sigset_t;


and this “1024 signal” limit has been preserved in the current definition:



/* A `sigset_t' has a bit for each signal.  */
#define _SIGSET_NWORDS (1024 / (8 * sizeof (unsigned long int)))
typedef struct
  {
    unsigned long int __val[_SIGSET_NWORDS];
  } __sigset_t;


which makes the 1024-based calculation clearer (and results in 16 unsigned longs on 64-bit x86, i.e. 128 bytes).
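If you want to re-check that arithmetic, here is a minimal sketch that simply re-evaluates the header's formula (the nwords variable mirrors glibc's internal _SIGSET_NWORDS macro; it is not part of the public API):

#include <signal.h>
#include <stdio.h>

int main(void)
{
    /* Re-derive glibc's figure: 1024 bits packed into unsigned longs. */
    size_t nwords = 1024 / (8 * sizeof(unsigned long));    /* 16 on LP64          */
    printf("words = %zu, bytes = %zu\n",
           nwords, nwords * sizeof(unsigned long));        /* 16, 128             */
    printf("sizeof(sigset_t) = %zu\n", sizeof(sigset_t));  /* 128 with glibc/musl */
    return 0;
}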



Presumably the glibc maintainers wanted to leave room for growth...



musl aims for ABI compatibility with glibc for sigaction, so it uses the same 1024-bit (128-byte) size:



TYPEDEF struct __sigset_t { unsigned long __bits[128/sizeof(long)]; } sigset_t;
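As an aside, one way to see that the kernel itself only looks at 8 of those 128 bytes on x86-64 is to call the raw rt_sigprocmask syscall with an explicit sigsetsize. This is a minimal sketch, assuming x86-64 Linux, where the kernel's own sigset_t is _NSIG/8 = 8 bytes; the expectation is that size 8 succeeds while the libc's 128 is rejected with EINVAL:

#define _GNU_SOURCE
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    unsigned long kset = 0;  /* the kernel-side 8-byte signal mask (x86-64 assumption) */

    /* sigsetsize == 8: matches the kernel's sigset_t, should succeed. */
    long r8 = syscall(SYS_rt_sigprocmask, SIG_BLOCK, NULL, &kset, sizeof kset);
    printf("sigsetsize=8:   %ld (%s)\n", r8, r8 ? strerror(errno) : "ok");

    /* sigsetsize == 128: the libc figure, expected to fail with EINVAL. */
    long r128 = syscall(SYS_rt_sigprocmask, SIG_BLOCK, NULL, NULL, 128);
    printf("sigsetsize=128: %ld (%s)\n", r128, r128 ? strerror(errno) : "ok");
    return 0;
}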





answered Oct 20 '17 at 14:15 by Stephen Kitt; edited Oct 20 '17 at 14:28



























             
