hardlink/softlink multiple files to one file













I have many files in a folder and want to concatenate them all into a single file, for example cat * > final_file. But this duplicates the data on disk and also takes time. Is there a way to hardlink/softlink all the files to final_file instead, for example ln * final_file?










  • Related: Virtual file made out of smaller ones (for mac-like sparse bundle solution) and How to split a ddrescue disk image and how to use it again? – Stéphane Chazelas, Jul 31 '13 at 20:33
  • You could use FUSE for this task. I've created a simple example on how to accomplish this: cat-fuse. – Jakub Nowak, Aug 19 at 19:35














Tags: symlink, cat, hard-link






asked Jul 31 '13 at 19:59 by quartz; edited Jul 31 '13 at 22:55 by Gilles







3 Answers

















Answer (score 5, accepted)










With links, I'm afraid, this will not be possible. However, you could use a
named pipe. Example:



# create some dummy files
echo alpha >a
echo beta >b
echo gamma >c

# create named pipe
mkfifo allfiles

# concatenate files into pipe
cat a b c >allfiles


The last command blocks until some process reads from the pipe, and then exits. For continuous operation one can use a loop, which waits for a reader and then starts over again.



while true; do
    cat a b c >allfiles
done
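To actually consume the pipe, some process has to open it for reading; a minimal end-to-end sketch (same file names a, b, c and allfiles as in the answer, run in a scratch directory):

```shell
cd "$(mktemp -d)"
echo alpha >a; echo beta >b; echo gamma >c
mkfifo allfiles

# the writer blocks until a reader opens the pipe, so run it in the background
cat a b c >allfiles &

# the reader sees the concatenation as if allfiles were a regular file
result=$(cat allfiles)
wait
printf '%s\n' "$result"
```

Opening a FIFO blocks until both a reader and a writer are present, so the order of the two cat invocations does not matter here.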





answered Jul 31 '13 at 20:09 by Marco (edited Jul 31 '13 at 20:21)






















  • Wouldn't cat block until a process starts reading from the named pipe? – Joseph R., Jul 31 '13 at 20:10
  • @JosephR. Yes, it would. Append & if that's not desired. – Marco, Jul 31 '13 at 20:14
  • Great. But wouldn't the OP need to repeat these steps every time he/she wants to access the full content via allfiles? – Joseph R., Jul 31 '13 at 20:15
  • Indeed. Use while true; do cat a b c >allfiles; done if that's not desired. – Marco, Jul 31 '13 at 20:18
  • Thanks for bearing with me (I'm still grappling with the fifo concept). I think your notes should be added to the answer as the OP might not be aware of that. – Joseph R., Jul 31 '13 at 20:19

















Answer (score 2)













This is not possible.

N files mean N inodes. Hard links, by definition, are simply different names for the same inode. Symlinks are files that point to another name (their target), which itself resolves to a single inode. Either way, soft or hard, a link can refer to only one inode.

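The one-inode-per-link point is easy to check with stat; a small sketch, assuming GNU stat's -c format flag with %i for the inode number:

```shell
cd "$(mktemp -d)"
echo data >orig

ln orig hard       # hard link: a second name for the same inode
ln -s orig soft    # symlink: a separate inode whose content is the path "orig"

# both names of the hard link report the identical inode number
stat -c '%i' orig
stat -c '%i' hard

# the symlink itself is a different inode (stat -L would follow it instead)
stat -c '%i' soft
```

Neither kind of link can merge two existing inodes; it only adds another way to reach one of them.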





answered Jul 31 '13 at 20:05 by Joseph R.



























Answer (score 2)













In a straight way, no: you cannot hard/soft link several files into a single file. Links are nothing more and nothing less than pointers from one name to another.

Now if you are worried about space and want to release it, you can do the following:

for i in *
do
    [ "$i" = destination_file ] && continue  # don't append the target to itself
    cat < "$i" >> destination_file &&
    rm -f -- "$i"
done

Basically, it appends each file to destination_file and removes it afterwards. This assumes you don't need the original files.






answered Jul 31 '13 at 20:07 by BitsOfNix; edited Jul 31 '13 at 20:42 by Stéphane Chazelas


















    • Why do you parse ls? Just use for i in *. And why the loop in the first place? Just do cat * >> destination. – Marco, Jul 31 '13 at 20:10
    • Why not quote the variable ("$i") to allow for spaces in the file name? – Joseph R., Jul 31 '13 at 20:13
    • @JosephR. That doesn't help. If you have special characters it'll break. – Marco, Jul 31 '13 at 20:16
    • @Marco in the OP he's worried about space. Otherwise cat * >> destination was more than enough ... hence the loop to cat and remove the file. – BitsOfNix, Jul 31 '13 at 20:36
    • @AlexandreAlves My point is: i) This loop is terrible. ii) It is totally unnecessary. Just use cat * >dest followed by rm !(dest) (bash) or rm ^dest (zsh). – Marco, Jul 31 '13 at 20:41
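Marco's comment suggests cat * >dest followed by rm !(dest), where !(dest) is a bash extglob pattern; a portable sh sketch of the same idea, with a plain loop standing in for the extglob:

```shell
cd "$(mktemp -d)"
printf 'one\n' >f1
printf 'two\n' >f2

cat * >dest     # the glob is expanded before the shell creates dest
for f in *; do  # portable stand-in for bash's rm -- !(dest)
    [ "$f" = dest ] || rm -- "$f"
done

cat dest
```

This works because word expansion happens before the output redirection creates dest, so dest is never among cat's inputs; the cleanup loop then removes everything except it.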









