What is the benefit of /etc/apt/sources.list.d over /etc/apt/sources.list

I know this question has been asked before, but I do not accept the answer, "you can clearly see custom additions". When I add PPAs (which I have not done in years), I hit a key on my keyboard labeled "Enter", which allows me to add an empty line before the new entry (I would even add an explanatory comment, but I am a tech writer, so ...). I like my sources.list clean and neat.



/etc/apt/sources.list.d


Means I have half a dozen files to parse instead of just one.



AFAIK, there is "absolutely" no advantage to having six configuration files instead of one (for the sake of argument, maybe you have 3 or even 2; it doesn't matter ... 1 still beats 2).



Can somebody please come up with a rational advantage? "You can clearly see custom additions" is a poor man's excuse.



I must add: I love change, but ONLY when the change introduces benefits.



Edit after first response:




It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries.




Now they have to search a directory for dupes instead of a flat file. Unless they assume admins don't change things ...




It allows a system administrator to easily disable (by renaming) or remove (by deleting) a repository set without having to edit a monolithic file.




The admin has to grep the directory to find the appropriate file to rename; before, he would search ONE file and comment out a line, a sed one-liner for "almost" any admin.
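For concreteness, that "sed one-liner" might look like the following sketch. The PPA URL is invented, and the example works on a temporary copy of sources.list so it is safe to run anywhere:

```shell
# Work on a temporary copy of sources.list; the PPA line is invented.
list=$(mktemp)
cat > "$list" <<'EOF'
deb http://archive.ubuntu.com/ubuntu bionic main
deb http://ppa.example.com/some-ppa bionic main
EOF

# The one-liner: comment out every line mentioning the unwanted repo
# ('&' in the replacement stands for the whole matched portion)
sed -i 's|^deb http://ppa.example.com/some-ppa|# &|' "$list"

cat "$list"
```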




It allows a package maintainer to give a simple command to update repository locations without having to worry about inadvertently changing the configuration for unrelated repositories.




I do not understand this one; I "assume" the package maintainer knows the URL of his repository. Again, he has to sed a directory instead of a single file.







  • The comments and question edits have quickly drifted from "trying to answer the question" to "ranting about the existence of the problem". The useful comments already appear in the accepted answer
    – Michael Mrozek♦
    Nov 28 '17 at 3:21














asked Nov 27 '17 at 17:02 by thecarpy
edited Nov 28 '17 at 13:04 by rchard2scout







6 Answers

















13 votes, accepted










On a technical level, as someone who has had to handle these changes in a few large and popular system info tools, basically it comes down to this:



For sources.list.d/



# to add
if [[ ! -e /etc/apt/sources.list.d/some_repo.list ]]; then
    echo 'some repo line for apt' > /etc/apt/sources.list.d/some_repo.list
fi

# to delete
if [[ -e /etc/apt/sources.list.d/some_repo.list ]]; then
    rm -f /etc/apt/sources.list.d/some_repo.list
fi


Note that unless they are also doing the same check as below, these tests would be wrong if you had commented out a repo line. If they are doing the same check as below, then it's exactly the same complexity, except carried out over many files instead of one. Also, unless they check ALL possible files, they can, and often do, add a duplicate item, which then makes apt complain until you delete one of them.



For sources.list



# to add. Respect commented out lines. Bonus points for uncommenting
# line instead of adding a new line
if [[ -z $( grep -E '^\s*[^#]*some repo line for apt' /etc/apt/sources.list ) ]]; then
    echo 'some repo line for apt' >> /etc/apt/sources.list
fi

# to delete. Delete whether commented out or not. Bonus for not
# deleting if commented out, thus respecting the user's wishes
sed -i '/.*some repo line for apt.*/d' /etc/apt/sources.list


The Google Chrome devs didn't check for the presence of Google Chrome sources; they relied on the exact file name their Chrome package would create being present. In all other cases, they would create a new file in sources.list.d named exactly the way they wanted.



For seeing what sources you have, of course, the new layout is not so pretty, since you can't get anything easier to read and maintain than:



cat /etc/apt/sources.list


So this was basically done for the purpose of automated updates, and to provide easy single commands you could give to users, as far as I can tell. For users, it means they have to read many files instead of one to see if they have a repo added, and for apt, it means it has to read many files instead of one as well.



In the real world, if you were going to do this well, you would have to support checks against all the files, regardless of what they are named, and then test whether the action to be carried out is actually required.



However, if you were not going to do it well, you'd skip the checks for whether the item is already somewhere in the sources and just check for the file name. I believe that's what most automated stuff does; but since, in the end, I simply had to check everything so I could list it and act on whether one of those files matched, the only real result was making it a lot more complicated.



Bulk Edits



Given many servers to run, I'd be tempted to script a nightly job that loops through /etc/apt/sources.list.d/: for each file, it first checks whether the item is already in sources.list; if it is not, it adds the item to sources.list and deletes the sources.list.d file, and if it is already in sources.list, it just deletes the sources.list.d file.
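A minimal sketch of that nightly consolidation job, exercised here against a throwaway directory tree instead of the real /etc/apt (all repository lines are invented):

```shell
# Build a fake /etc/apt layout in a temp directory.
root=$(mktemp -d)
mkdir -p "$root/sources.list.d"
echo 'deb http://archive.example.com/ubuntu bionic main' > "$root/sources.list"
echo 'deb http://ppa.example.com/foo bionic main'        > "$root/sources.list.d/foo.list"
echo 'deb http://archive.example.com/ubuntu bionic main' > "$root/sources.list.d/dupe.list"

for f in "$root/sources.list.d"/*.list; do
    while IFS= read -r line; do
        case $line in ''|'#'*) continue ;; esac          # skip blanks and comments
        # append only when the exact line is not already in sources.list
        grep -qxF "$line" "$root/sources.list" || printf '%s\n' "$line" >> "$root/sources.list"
    done < "$f"
    rm -f "$f"                                           # merged (or duplicate): remove drop-in
done

cat "$root/sources.list"
```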



Since there is NO negative to using only sources.list, and it brings simplicity and ease of maintenance, adding something like that might not be a bad idea, particularly given the creative random actions of sysadmins.



As noted in the comment above, inxi -r will neatly print out the active repos per file, but it will of course not edit or alter them, so that would be only half the solution. Across many distributions, it's a pain learning how each one does it, that's for sure; randomness is certainly the rule rather than the exception, sadly.




























  • Comments are not for extended discussion; this conversation has been moved to chat.
    – terdon♦
    Nov 30 '17 at 11:05

















37 votes













Having each repository (or collection of repositories) in its own file makes it simpler to manage, both by hand and programmatically:



  • It allows new installations that need their own repos to not have to
    search a flat file to ensure that it is not adding duplicate entries.

  • It allows a system administrator to easily disable (by renaming) or
    remove (by deleting) a repository set without having to edit a
    monolithic file.

  • It allows a package maintainer to give a simple
    command to update repository locations without having to worry about
    inadvertently changing the configuration for unrelated repositories.
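The disable/enable-by-renaming point above can be sketched as follows. Apt only reads files in sources.list.d whose names end in .list (or, on newer releases, .sources), so any other suffix takes the file out of play; the file name and repository line below are invented, and the demo uses a temp directory:

```shell
# Disable a repo by renaming its drop-in file; demo in a temp directory.
d=$(mktemp -d)
echo 'deb http://ppa.example.com/foo bionic main' > "$d/foo.list"

mv "$d/foo.list" "$d/foo.list.disabled"   # disabled: apt would now ignore it
mv "$d/foo.list.disabled" "$d/foo.list"   # re-enable: just rename it back
```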





















  • 11




    This is better than the accepted answer... The key concept is "ownership". The .d design clearly separates configuration state owned by different entities. One might be owned by a package. Another might be installed via wget .... With a single monster file, how does any automated or semi-automated procedure "know" which piece of the config it owns? It doesn't, which is why the .d design is superior.
    – Nemo
    Nov 28 '17 at 0:15






  • 12




    Not sure about 'by hand', but I haven't done that for years. It benefits programmatic management. When using config management software like Puppet it's easier to just drop or remove a file in dir and run apt update, instead of parsing a file to add or remove lines. Especially since that avoids having to manage a single 'file' resource from multiple independent modules. I appreciate Ubuntu's broad use of ".d" dirs for this reason.
    – Martijn Heemels
    Nov 28 '17 at 1:40






  • 2




    @MartijnHeemels I would upvote your comment a hundred times if I could. For me personally, the benefits of the .d design snapped immediately into focus once I started doing heavy Puppet/Salt configuration management.
    – smitelli
    Nov 28 '17 at 2:45






  • 3




    @thecarpy, if your admins try to fool you, you should find more trustworthy admins. Calling what I (or indeed anyone) write(s) "utter rubbish" is, at best, rude.
    – DopeGhoti
    Nov 28 '17 at 4:08






  • 7




    Confirming this from ops perspective. Having whole files provisioned and owned either by specific packages, or by modules of your config management system is much cleaner than trying to write a parser on the fly for each application you configure. It may seem trivial just for apt, but then you get a number of other systems which can use the same strategy (logrotate, cron, sysctl, sudoers, rsyslog, modprobe, ... load configs from service.d/* files) Deploying files rather than modifying existing ones is also better for image caching/comparison.
    – viraptor
    Nov 28 '17 at 4:28


















10 votes













If you're manually managing your servers I'll agree it makes things more confusing. However, it benefits programmatic management (i.e. "configuration as code"). When using config management software like Puppet, Ansible, Chef, etc., it's easier to just drop or remove a file in a dir and run apt update, instead of parsing a file to add or remove certain lines.



Especially since that avoids having to manage the contents of a single 'file' resource, e.g: /etc/apt/sources.list, from multiple independent modules that have been written by third parties.



I appreciate Ubuntu's broad use of ".d" dirs for this particular reason, i.e. sudoers.d, rsyslog.d, sysctl.d., cron.d, logrotate.d, etc.
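The "file resource" idea can be illustrated without any particular config-management tool: writing a whole drop-in file is idempotent by construction, whereas line-editing a shared file is not. A sketch, using a temp directory and invented repository lines:

```shell
# Writing a whole file is idempotent: running the "resource" twice
# leaves the same state. Demo in a temp directory.
d=$(mktemp -d)
write_repo() {
    # the whole file is owned by this "module"; no parsing of shared state needed
    printf '%s\n' "$2" > "$d/$1.list"
}
write_repo foo 'deb http://ppa.example.com/foo bionic main'
write_repo foo 'deb http://ppa.example.com/foo bionic main'   # second run: no change
```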



































    5 votes













    As nemo pointed out in a comment, one of the key advantages of a directory is it allows for the notion of "ownership".



    Modern Linux distributions and installers are all based around the idea of packages - independent pieces of software which can, as much as possible, be added and removed atomically. Whenever you install a package with dpkg (and therefore apt), it keeps track of which files on the system were created by that installer. Uninstalling the package can then largely consist of deleting those files.



    The currently accepted answer takes it as a bad thing that a Google Chrome installer assumed that it should only create or delete an entry in the location it expected, but automating anything else leads to all sorts of horrible edge cases; for instance:



    • If the line exists in sources.list when installing, but is commented out, should the installer uncomment it, or add a duplicate?

    • If the uninstaller removes the line, but the user has added or edited comments next to it, the file will be left with broken commentary.

    • If the user manually added the line, the installer could know not to add it; but how would the uninstaller know not to remove it?

    Separate files are not required for ownership; for instance, the installer could include a block of comments stating that it "owns" a particular set of lines. In that case, it would always search the file for that exact block, not for some other mention of the same repository.
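A sketch of that comment-block ownership scheme (the marker text and repository lines are invented, and the demo works on a temp file):

```shell
# The installer "owns" only the region between its own markers.
list=$(mktemp)
cat > "$list" <<'EOF'
deb http://archive.example.com/ubuntu bionic main
# BEGIN managed by examplepkg -- do not edit
deb http://repo.example.com/apt stable main
# END managed by examplepkg
EOF

# Uninstall: delete exactly the owned block and nothing else
sed -i '/^# BEGIN managed by examplepkg/,/^# END managed by examplepkg/d' "$list"

cat "$list"
```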



    All else being equal, automating edits to a single configuration file will always be more complicated than automating creation and deletion of a separate file. At the very least, removing lines requires use of some pattern-matching tool such as sed. In a more complex file, both adding and removing lines might require a scripting tool with knowledge of the file format, to add to an appropriate section, or remove without damaging surrounding formatting.



    Since an installer would need to avoid messing with manually edited configuration anyway, it makes sense to put automated, tool-owned, configuration in a format that is easy for automated tools to manage.



































      3 votes













      This allows packages to add extra sources without resorting to scripts.



      For example, when you install Microsoft's Skype package, a source for skype.com is automatically configured to download updates; removing the Skype package from the system also disables this package source again.



      If you wanted to have the same effect with a flat file, the installation scripts for Skype would need to modify your sources.list, which a lot of system administrators would probably find slightly unnerving.

































        -3 votes













        I'm not convinced that there is a good reason, other than that it seems fashionable. To me, it breaks the rule that a directory should be either a leaf or a node, i.e. that it should contain only files or only directories, not a mixture of both.



        I suppose that it does make files smaller, and so easier to read. In the case of sudo rules, for instance, which can be quite long, it makes it easier to have a standardized set of rules for one type of user (say, a developer) and add those to the config directory if devs should be allowed to sudo on this machine; thus you need to maintain fewer files: just one file for devs, one for admins, one for sysops, etc., rather than one for every possible combination thereof.



        There, I've contradicted myself.






















        • 3




          I wouldn't take "a directory should either be a leaf or a node" as a rule. As a contrived example, look at /var/log. A simple daemon might write one sole file directly inside: /var/log/simple.log. A more complex daemon might need its own subdirectory: /var/log/complex/a.log, /var/log/complex/b.log, /var/log/complex/c.log... Similar pattern with configs.
          – smitelli
          Nov 28 '17 at 3:00











        • I'd fashion that as /var/log/simple/log.1, .2, etc. The path gives you info. So /var/log would contain subdirs for each log type, and each subdir could have one or many files in it. I admit there are examples where exceptions are reasonable, but as a general rule it's good. I hate seeing home directories with files in them: evidence, IMO, of disorganisation.
          – Graham Nicholls
          Nov 28 '17 at 10:28







        • 2




          There is a good reason, but you need to think as an admin before you understand it. See DopeGhoti's answer.
          – reinierpost
          Nov 28 '17 at 10:41










        • Well, that put me in my place, didn't it? obviously I can't think as an admin - or I simply disagree with you.
          – Graham Nicholls
          Dec 1 '17 at 19:37










        Your Answer







        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "106"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        convertImagesToLinks: false,
        noModals: false,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );













         

        draft saved


        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f407328%2fwhat-is-the-benefit-of-etc-apt-sources-list-d-over-etc-apt-sources-list%23new-answer', 'question_page');

        );

        Post as a guest






























        6 Answers
        6






        active

        oldest

        votes








        6 Answers
        6






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        13
        down vote



        accepted










        On a technical level, as someone who has had to handle these changes in a few large and popular system info tools, basically it comes down to this:



        For sources.list.d/



        # to add
        if [[ ! -e /etc/apt/sources.list.d/some_repo.list ]];then
        echo 'some repo line for apt' > /etc/apt/sources.list.d/some_repo.list
        fi

        # to delete
        if [[ -e /etc/apt/sources.list.d/some_repo.list ]];then
        rm -f /etc/apt/sources.list.d/some_repo.list
        fi


        Note that unless they are also doing the same check as below, if you had commented out a repo line, these tests would be wrong. If they are doing the same check as below, then it's the same exact complexity, except carried out over many files, not one. Also, unless they are checking ALL possible files, they can, and often do, add a duplicate item, which then makes apt complain, until you delete one of them.



        For sources.list



        # to add. Respect commented out lines. Bonus points for uncommenting
        # line instead of adding a new line
        if [[ -z $( grep -E 's*[^#]s*some repo line for apt' /etc/apt/sources.list ) ]];then
        echo 'some repo line for apt' >> /etc/apt/sources.list
        fi

        # to delete. Delete whether commented out or not. Bonus for not
        # deleting if commented out, thus respecting the user's wishes
        sed -i '/.*some repo line for apt.*/d' /etc/apt/sources.list


        The Google Chrome devs didn't check for the presence of Google Chrome sources, relying on the exact file name their Chrome package would create to be present. In all other cases, they would create a new file in sources.list.d named exactly the way they wanted.



        To see what sources you have, of course, it's not so pretty, since you can't get easier to read and maintain than:



        cat /etc/sources.list


        So this was basically done for the purpose of automated updates, and to provide easy single commands you could give to users, as far as I can tell. For users, it means that they have to read many files instead of 1 file to see if they have a repo added, and for apt, it means it has to read many files instead of one file as well.



        Since in the real world, if you were going to do this well, you have to support checks against all the files, regardless of what they are named, and then test if the action to be carried out is required or not required.



        However, if you were not going to do it well, you'd just ignore the checks to see if the item is somewhere in sources, and just check for the file name. I believe that's what most automated stuff does, but since in the end, I simply had to check everything so I could list it and act based on if one of those files matched, the only real result was making it a lot more complicated.



        Bulk Edits



        Given running many servers, I'd be tempted to just script a nightly job that loops through /etc/apt/sources.list.d/ and checks first to make sure the item is not in sources.list already, then if it is not, add that item to sources.list, delete the sources.list.d file, and if already in sources.list, just delete the sources.list.d file



        Since there is NO negative to using only sources.list beyond simplicity and ease of maintenance, adding something like that might not be a bad idea, particularly given creative random actions by sys admins.



        As noted in the above comment, inxi -r will neatly print out per file the active repos, but will not of course edit or alter them, so that would be only half the solution. If it's many distributions, it's a pain learning how each does it, that's for sure, and randomness certainly is the rule rather than the exception sadly.






        share|improve this answer






















        • Comments are not for extended discussion; this conversation has been moved to chat.
          – terdon♦
          Nov 30 '17 at 11:05














        up vote
        13
        down vote



        accepted










        On a technical level, as someone who has had to handle these changes in a few large and popular system info tools, basically it comes down to this:



        For sources.list.d/



        # to add
        if [[ ! -e /etc/apt/sources.list.d/some_repo.list ]];then
        echo 'some repo line for apt' > /etc/apt/sources.list.d/some_repo.list
        fi

        # to delete
        if [[ -e /etc/apt/sources.list.d/some_repo.list ]];then
        rm -f /etc/apt/sources.list.d/some_repo.list
        fi


        Note that unless they are also doing the same check as below, if you had commented out a repo line, these tests would be wrong. If they are doing the same check as below, then it's the same exact complexity, except carried out over many files, not one. Also, unless they are checking ALL possible files, they can, and often do, add a duplicate item, which then makes apt complain, until you delete one of them.



        For sources.list



        # to add. Respect commented out lines. Bonus points for uncommenting
        # line instead of adding a new line
        if [[ -z $( grep -E 's*[^#]s*some repo line for apt' /etc/apt/sources.list ) ]];then
        echo 'some repo line for apt' >> /etc/apt/sources.list
        fi

        # to delete. Delete whether commented out or not. Bonus for not
        # deleting if commented out, thus respecting the user's wishes
        sed -i '/.*some repo line for apt.*/d' /etc/apt/sources.list


        The Google Chrome devs didn't check for the presence of Google Chrome sources, relying on the exact file name their Chrome package would create to be present. In all other cases, they would create a new file in sources.list.d named exactly the way they wanted.



        To see what sources you have, of course, it's not so pretty, since you can't get easier to read and maintain than:



        cat /etc/sources.list


        So this was basically done for the purpose of automated updates, and to provide easy single commands you could give to users, as far as I can tell. For users, it means that they have to read many files instead of 1 file to see if they have a repo added, and for apt, it means it has to read many files instead of one file as well.



        Since in the real world, if you were going to do this well, you have to support checks against all the files, regardless of what they are named, and then test if the action to be carried out is required or not required.



        However, if you were not going to do it well, you'd just ignore the checks to see if the item is somewhere in sources, and just check for the file name. I believe that's what most automated stuff does, but since in the end, I simply had to check everything so I could list it and act based on if one of those files matched, the only real result was making it a lot more complicated.



        Bulk Edits



        Given running many servers, I'd be tempted to just script a nightly job that loops through /etc/apt/sources.list.d/ and checks first to make sure the item is not in sources.list already, then if it is not, add that item to sources.list, delete the sources.list.d file, and if already in sources.list, just delete the sources.list.d file



        Since there is NO negative to using only sources.list beyond simplicity and ease of maintenance, adding something like that might not be a bad idea, particularly given creative random actions by sys admins.



        As noted in the above comment, inxi -r will neatly print out per file the active repos, but will not of course edit or alter them, so that would be only half the solution. If it's many distributions, it's a pain learning how each does it, that's for sure, and randomness certainly is the rule rather than the exception sadly.






        share|improve this answer






















        • Comments are not for extended discussion; this conversation has been moved to chat.
          – terdon♦
          Nov 30 '17 at 11:05












        up vote
        13
        down vote



        accepted








        On a technical level, speaking as someone who has had to handle these changes in a few large and popular system information tools, it basically comes down to this:



        For sources.list.d/



        # to add
        if [[ ! -e /etc/apt/sources.list.d/some_repo.list ]]; then
            echo 'some repo line for apt' > /etc/apt/sources.list.d/some_repo.list
        fi

        # to delete
        if [[ -e /etc/apt/sources.list.d/some_repo.list ]]; then
            rm -f /etc/apt/sources.list.d/some_repo.list
        fi


        Note that unless they are also doing the same check as below, these tests would be wrong if you had commented out a repo line. If they are doing the same check as below, then it's exactly the same complexity, except carried out over many files rather than one. Also, unless they are checking ALL possible files, they can, and often do, add a duplicate item, which then makes apt complain until you delete one of them.
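A check that covers every file, as described, could be sketched roughly like this. `find_dup_repos` is a hypothetical helper name and the `example.com` URLs are placeholders; in real use you would pass it `/etc/apt/sources.list /etc/apt/sources.list.d/*.list`:

```shell
# Sketch of the "check ALL possible files" approach: report any active
# repo line that appears more than once across the given source files.
# find_dup_repos is a hypothetical helper name.
find_dup_repos() {
    cat "$@" 2>/dev/null | grep -E '^[[:space:]]*deb' | sort | uniq -d
}
```

Anything this prints is a duplicate entry that apt would complain about.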



        For sources.list



        # to add. Respect commented-out lines. Bonus points for uncommenting
        # the line instead of adding a new one
        if [[ -z $( grep -E '^\s*[^#]*some repo line for apt' /etc/apt/sources.list ) ]]; then
            echo 'some repo line for apt' >> /etc/apt/sources.list
        fi

        # to delete. Deletes whether commented out or not. Bonus for not
        # deleting if commented out, thus respecting the user's wishes
        sed -i '/.*some repo line for apt.*/d' /etc/apt/sources.list
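The "bonus points" variant, re-enabling a commented-out entry rather than appending a duplicate, might be sketched like this. The scratch file and the `example.com` repo line are placeholders for `/etc/apt/sources.list` and a real repo line (which you would edit as root):

```shell
# Sketch: uncomment an existing entry instead of adding a new line.
# LIST and the repo URL are placeholders for /etc/apt/sources.list
# and a real repo line.
LIST=$(mktemp)
repo='deb http://example.com/ubuntu bionic main'
printf '# %s\n' "$repo" > "$LIST"     # simulate a commented-out entry

if grep -qE "^[[:space:]]*#[[:space:]]*${repo}\$" "$LIST"; then
    # Present but commented out: strip the leading '#'
    sed -i -E "s|^[[:space:]]*#[[:space:]]*(${repo})\$|\1|" "$LIST"
elif ! grep -qF "$repo" "$LIST"; then
    echo "$repo" >> "$LIST"           # genuinely missing: append it
fi
```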


        The Google Chrome devs didn't check for the presence of existing Google Chrome sources; they simply relied on the exact file name their Chrome package creates being present. In all other cases, they would create a new file in sources.list.d named exactly the way they wanted.



        To see what sources you have, of course, it's not so pretty, since you can't get anything easier to read and maintain than:



        cat /etc/apt/sources.list


        So, as far as I can tell, this was basically done for the purpose of automated updates, and to provide easy single commands you could give to users. For users, it means they have to read many files instead of one to see whether they have a repo added, and for apt, it likewise means reading many files instead of one.



        In the real world, if you were going to do this well, you would have to support checks against all the files, regardless of what they are named, and then test whether the action to be carried out is actually required.



        However, if you were not going to do it well, you'd skip the checks for whether the item is already somewhere in the sources and just check for the file name. I believe that's what most automated stuff does. But since in the end I had to check everything anyway, so I could list the repos and act when one of those files matched, the only real result was making it all a lot more complicated.



        Bulk Edits



        Given that I run many servers, I'd be tempted to script a nightly job that loops through /etc/apt/sources.list.d/, checks first whether each item is already in sources.list, adds it to sources.list if it is not, and then deletes the sources.list.d file either way.
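Under those assumptions (merge the active deb lines, then drop the fragment), such a job might be sketched as a function like the one below. `consolidate_sources` is a hypothetical name; in real use you would call it as root with `/etc/apt/sources.list` and `/etc/apt/sources.list.d`:

```shell
# Sketch of the nightly consolidation job described above.
# consolidate_sources is a hypothetical helper; real usage would be:
#   consolidate_sources /etc/apt/sources.list /etc/apt/sources.list.d
consolidate_sources() {
    sources=$1
    parts_dir=$2
    for f in "$parts_dir"/*.list; do
        [ -e "$f" ] || continue        # directory may hold no .list files
        # Append each active repo line that sources.list lacks
        grep -E '^[[:space:]]*deb' "$f" | while IFS= read -r line; do
            grep -qxF "$line" "$sources" || echo "$line" >> "$sources"
        done
        rm -f "$f"                     # fragment is now redundant either way
    done
}
```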



        Since using only sources.list has NO real downside, and gains you simplicity and ease of maintenance, adding something like that might not be a bad idea, particularly given the creative, random actions of sysadmins.



        As noted in the comment above, inxi -r will neatly print out the active repos per file, but of course it will not edit or alter them, so that would be only half the solution. If you support many distributions, it's a pain learning how each one does it, that's for sure, and sadly randomness is the rule rather than the exception.






        edited Nov 28 '17 at 3:32









        muru

        answered Nov 27 '17 at 18:02









        Lizardx

        • Comments are not for extended discussion; this conversation has been moved to chat.
          – terdon♦
          Nov 30 '17 at 11:05
















        up vote
        37
        down vote













        Having each repository (or collection of repositories) in its own file makes it simpler to manage, both by hand and programmatically:



        • It allows new installations that need their own repos to not have to
          search a flat file to ensure that it is not adding duplicate entries.

        • It allows a system administrator to easily disable (by renaming) or
          remove (by deleting) a repository set without having to edit a
          monolithic file.

        • It allows a package maintainer to give a simple
          command to update repository locations without having to worry about
          inadvertently changing the configuration for unrelated repositories.
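The disable-by-rename point can be illustrated with a pair of tiny helpers. `disable_repo`/`enable_repo` are hypothetical names; apt only reads files in sources.list.d whose names end in .list (or .sources, for the newer deb822 format), so any other suffix effectively disables the file:

```shell
# Sketch: disable/enable a repo fragment by renaming it, since apt
# ignores files in sources.list.d that don't end in .list.
# disable_repo/enable_repo are hypothetical helper names.
disable_repo() { mv "$1.list" "$1.list.disabled"; }
enable_repo()  { mv "$1.list.disabled" "$1.list"; }

# e.g. disable_repo /etc/apt/sources.list.d/some_repo
```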






        • 11




          This is better than the accepted answer... The key concept is "ownership". The .d design clearly separates configuration state owned by different entities. One might be owned by a package. Another might be installed via wget .... With a single monster file, how does any automated or semi-automated procedure "know" which piece of the config it owns? It doesn't, which is why the .d design is superior.
          – Nemo
          Nov 28 '17 at 0:15






        • 12




          Not sure about 'by hand', but I haven't done that for years. It benefits programmatic management. When using config management software like Puppet it's easier to just drop or remove a file in dir and run apt update, instead of parsing a file to add or remove lines. Especially since that avoids having to manage a single 'file' resource from multiple independent modules. I appreciate Ubuntu's broad use of ".d" dirs for this reason.
          – Martijn Heemels
          Nov 28 '17 at 1:40






        • 2




          @MartijnHeemels I would upvote your comment a hundred times if I could. For me personally, the benefits of the .d design snapped immediately into focus once I started doing heavy Puppet/Salt configuration management.
          – smitelli
          Nov 28 '17 at 2:45






        • 3




          @thecarpy, if your admins try to fool you, you should find more trustworthy admins. Calling what I (or indeed anyone) write(s) "utter rubbish" is, at best, rude.
          – DopeGhoti
          Nov 28 '17 at 4:08






        • 7




          Confirming this from ops perspective. Having whole files provisioned and owned either by specific packages, or by modules of your config management system is much cleaner than trying to write a parser on the fly for each application you configure. It may seem trivial just for apt, but then you get a number of other systems which can use the same strategy (logrotate, cron, sysctl, sudoers, rsyslog, modprobe, ... load configs from service.d/* files) Deploying files rather than modifying existing ones is also better for image caching/comparison.
          – viraptor
          Nov 28 '17 at 4:28















        answered Nov 27 '17 at 17:12









        DopeGhoti

        up vote
        10
        down vote













        If you're manually managing your servers I'll agree it makes things more confusing. However, it benefits programmatic management (i.e. "configuration as code"). When using config management software like Puppet, Ansible, Chef, etc., it's easier to just drop or remove a file in a dir and run apt update, instead of parsing a file to add or remove certain lines.



        Especially since that avoids having to manage the contents of a single 'file' resource, e.g: /etc/apt/sources.list, from multiple independent modules that have been written by third parties.



        I appreciate Ubuntu's broad use of ".d" dirs for this particular reason, i.e. sudoers.d, rsyslog.d, sysctl.d, cron.d, logrotate.d, etc.






            edited Nov 29 '17 at 10:07









            GAD3R

            answered Nov 28 '17 at 9:12









            Martijn Heemels

                up vote
                5
                down vote













                As nemo pointed out in a comment, one of the key advantages of a directory is it allows for the notion of "ownership".



                Modern Linux distributions and installers are all based around the idea of packages - independent pieces of software which can, as much as possible, be added and removed atomically. Whenever you install a package with dpkg (and therefore apt), it keeps track of which files on the system were created by that installer. Uninstalling the package can then largely consist of deleting those files.



                The currently accepted answer takes it as a bad thing that a Google Chrome installer assumed that it should only create or delete an entry in the location it expected, but automating anything else leads to all sorts of horrible edge cases; for instance:



                • If the line exists in sources.list when installing, but is commented out, should the installer uncomment it, or add a duplicate?

                • If the uninstaller removes the line, but the user has added or edited comments next to it, the file will be left with broken commentary.

                • If the user manually added the line, the installer could know not to add it; but how would the uninstaller know not to remove it?

                Separate files are not required for ownership; for instance, the installer could include a block of comments stating that it "owns" a particular set of lines. In that case, it would always search the file for that exact block, not for some other mention of the same repository.
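That comment-block approach could be sketched like this. The marker text and function names are hypothetical, and the scratch file stands in for a real sources.list:

```shell
# Sketch of marker-based ownership inside a single flat file: the
# installer only ever touches the region between its own markers.
# Marker text and function names are hypothetical.
BEGIN_MARK='# BEGIN managed by example-pkg'
END_MARK='# END managed by example-pkg'

remove_owned_block() {  # $1 = sources file
    sed -i "/^${BEGIN_MARK}\$/,/^${END_MARK}\$/d" "$1"
}

add_owned_block() {     # $1 = sources file, $2 = repo line
    remove_owned_block "$1"   # idempotent: replace any previous block
    printf '%s\n%s\n%s\n' "$BEGIN_MARK" "$2" "$END_MARK" >> "$1"
}
```

Because the installer only ever matches its own exact markers, manually added mentions of the same repository elsewhere in the file are left alone.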



                All else being equal, automating edits to a single configuration file will always be more complicated than automating creation and deletion of a separate file. At the very least, removing lines requires use of some pattern-matching tool such as sed. In a more complex file, both adding and removing lines might require a scripting tool with knowledge of the file format, to add to an appropriate section, or remove without damaging surrounding formatting.



                Since an installer would need to avoid messing with manually edited configuration anyway, it makes sense to put automated, tool-owned, configuration in a format that is easy for automated tools to manage.






                  up vote
                  5
                  down vote

                    edited Nov 28 '17 at 18:48

                    answered Nov 28 '17 at 18:35

                    IMSoP
                        up vote
                        3
                        down vote













                        This allows packages to add extra sources without resorting to scripts.



                        For example, when you install Microsoft's Skype package, a source for skype.com is automatically configured to download updates; removing the Skype package from the system also disables this package source again.



                        If you wanted the same effect with a flat file, the installation scripts for Skype would need to modify your sources.list directly, which many system administrators would probably find slightly unnerving.
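That lifecycle can be sketched as follows. The filename and repository line are illustrative (not taken from the actual package), and the demo uses a scratch directory under /tmp instead of the real /etc/apt/sources.list.d:

```shell
dir=/tmp/apt-demo-skype
mkdir -p "$dir"

# "Install": the package drops exactly one file that it owns.
printf 'deb [arch=amd64] https://repo.skype.com/deb stable main\n' \
  > "$dir/skype-stable.list"

# Disable temporarily: apt only reads files ending in .list (or .sources),
# so renaming is enough to turn the repository off without losing it.
mv "$dir/skype-stable.list" "$dir/skype-stable.list.disabled"

# "Uninstall": dpkg deletes the file it recorded at install time.
rm "$dir/skype-stable.list.disabled"
```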






                            answered Nov 28 '17 at 18:37

                            Simon Richter
                                up vote
                                -3
                                down vote













                                I'm not convinced that there is a good reason, other than that it seems fashionable. To me, it breaks the rule that a directory should be either a leaf or a node, i.e. that it should contain only files or only directories, not a mixture of both.

                                I suppose it does make files smaller, and therefore easier to read. In the case of sudo rules, for instance, which can be quite long, it makes it easier to keep a standardized set of rules for one type of user (say, a developer) and add that file to the config directory if devs should be allowed to sudo on this machine. Thus you need to maintain fewer files: one for devs, one for admins, one for sysops, etc., rather than one for every possible combination thereof.

                                There, I've contradicted myself.
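The sudo case mentioned above can be sketched the same way. The role names and the rule below are hypothetical, and the demo writes to a scratch directory under /tmp rather than the real /etc/sudoers.d:

```shell
dir=/tmp/sudoers-demo
mkdir -p "$dir"

# One drop-in file per role; '%devs' is a group specifier in sudoers syntax.
# Granting devs sudo on this machine is one file copy; revoking is one rm.
# (On a real system: install into /etc/sudoers.d/ and validate with visudo -cf.)
printf '%%devs ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart myapp\n' > "$dir/devs"
rm "$dir/devs"
```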






















                                • 3




                                  I wouldn't take "a directory should either be a leaf or a node" as a rule. As a contrived example, look at /var/log. A simple daemon might write one sole file directly inside: /var/log/simple.log. A more complex daemon might need its own subdirectory: /var/log/complex/a.log, /var/log/complex/b.log, /var/log/complex/c.log... Similar pattern with configs.
                                  – smitelli
                                  Nov 28 '17 at 3:00











                                • I'd fashion that as /var/log/simple/log.1, .2, etc. The path gives you info. So /var/log would contain subdirs for each log type, and each subdir could have one or many files in it. I admit there are examples where exceptions are reasonable, but as a general rule it's good. I hate seeing home directories with loose files in them - evidence, IMO, of disorganisation.
                                  – Graham Nicholls
                                  Nov 28 '17 at 10:28







                                • 2




                                  There is a good reason, but you need to think as an admin before you understand it. See DopeGhoti's answer.
                                  – reinierpost
                                  Nov 28 '17 at 10:41










                                • Well, that put me in my place, didn't it? Obviously I can't think as an admin - or I simply disagree with you.
                                  – Graham Nicholls
                                  Dec 1 '17 at 19:37














                                answered Nov 27 '17 at 21:52

                                Graham Nicholls