Why has there been so little innovation with autoconf configure scripts in the Unix and Linux ecosystem? [closed]
A great deal of time is lost running ./configure, especially when dependencies are missing and configure has to be rerun after each one is installed.
I've read many discussions of this subject citing that caching the results of configure scripts for reuse by other configure scripts is error-prone due to stale results and things going out of sync, and that creating a common implementation for sharing results would be "very hard" [1].
I would like to note that I am aware that configure scripts already support caching, but it is disabled by default [2].
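For reference, the existing mechanisms look roughly like this (the site-file path is just the conventional location under a /usr prefix; a distro could point CONFIG_SITE anywhere):

    # per-package cache; -C is shorthand for --cache-file=config.cache
    ./configure -C

    # reuse one cache file across several packages (the error-prone case
    # the discussions above warn about)
    ./configure --cache-file="$HOME/.config.cache"

    # autoconf-generated scripts also read a site-wide defaults file,
    # which can be selected with the CONFIG_SITE environment variable
    CONFIG_SITE=/usr/share/config.site ./configure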
If distributions already provide package managers which understand interdependencies, and which already know what the common (or even the most common) source dependencies are, why is there no standard for some kind of configure cache provided by the distro? Given conditions the package manager can evaluate, one might assume that many of these tests shouldn't need to be run at all, or at least not unless the configure step fails.
Even though there are a number of competing build systems, I find that configure is still by far the most prevalent. These scripts are shell-based, single-threaded, error-prone, often provide cryptic or no diagnostics, and are often extremely slow. It's not unusual for me to encounter a failure in a configure script or compilation because of a missing dependency that configure didn't even check for.
Has this ever been addressed by any distro? For example, Gentoo does a tremendous amount of local compilation. Do they optimize any of this away?
I'm not looking for build system recommendations, but rather a historical or cultural perspective on why modern projects continue to rely so heavily on autoconf and configure. This may well fall within the bounds of opinion, but I think it is an extremely interesting topic which may have its fair share of facts grounded in build conventions, company policy, and greybeard customs.
A similar argument could be made of mailing lists, which are archaic compared to modern forms of collaboration on the web, but they are also simple and perform their function exactly as they always have, with little to no change, precisely because of that simplicity. Maybe autoconf shares a similar fate? (Disclaimer: I actually love mailing lists; any frustration is a result of poor support on the part of the mail client.)
compiling history configure unix-philosophy autoconf
asked Jul 28 at 15:31 by Zhro (edited Jul 28 at 16:07)
closed as primarily opinion-based by Stephen Harris, Jesse_b, Thomas Dickey, Rui F Ribeiro, dirkt Jul 28 at 17:44
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
Remember, the whole world is not Linux. The same script may run on Solaris, the various BSDs, AIX, HP-UX, MacOS,... Even within Linux there are various packaging systems (rpm/yum, dpkg/apt, portage...). Typically the best way to validate that a build dependency is met on a system is to test it and see if it works. Painful, but portable.
– Stephen Harris, Jul 28 at 15:45
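A minimal sketch of what such a test amounts to, with zlib standing in as an illustrative dependency:

    # compile and link a throwaway program; if it builds, the dependency is usable
    printf '#include <zlib.h>\nint main(void){return zlibVersion()?0:1;}\n' > conftest.c
    if cc conftest.c -lz -o conftest 2>/dev/null; then
        echo "checking for zlib... yes"
    else
        echo "checking for zlib... no"
    fi
    rm -f conftest conftest.c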
As often as configuration and compilation happen across Unix systems, it feels as though we've reached the point of "good enough" without much interest in further innovation. What we have certainly works. But it leaves me wondering why, after so many years of autoconf, we don't have something universally better.
– Zhro, Jul 28 at 16:05
I think we are again mixing up the concepts of GNU and Linux. This whole issue is impossible to understand until you know what GNU and Linux are and are not.
– ctrl-alt-delor, Jul 28 at 16:50
One factor that may be affecting this is that the GNU project wants all software to be buildable on as wide a selection of systems as possible.
– ctrl-alt-delor, Jul 28 at 16:52
2 Answers
Any answer will be at least a little bit speculation IMO, as what constitutes "innovation" differs from person to person (and whether it's a good thing!). Because autoconf is designed to be language- and architecture-agnostic for the most part, there's a lot of inertia keeping the desire to change low: you have to consider that some configure scripts are probably decades old by now, and people do not want to have to rewrite stuff that works.
Some of the restrictions that autoconf faces are architectural: for example, how can you use multithreading in a step that checks for multithreading? Will version 2.5 of libfoo work with a program that says it needs version 1.8? Other issues that you cite are often due to underspecification of dependencies: I've had many configure scripts that simply forget to list all of their direct dependencies. Lastly, autoconf seems to have a fairly limited number of maintainers, making it hard to make huge changes.
In some respects, package managers are where the innovation takes place. They can deal with concepts like different packages needing different versions of the same library, identify workarounds for dependencies that are no longer available, and (for source-compiling distros like Gentoo) provide patches to clean up configure scripts and source code so they work properly.
answered Jul 28 at 16:45 by ErikF
Since you bring up Gentoo, don't forget its USE flags, which let you somewhat automate various ./configure options. I still recall the time I did a stage 1 install from scratch (Gentoo had just come out), built all of X and the browser (Mozilla), and forgot to add "+jpeg" to my USE flags....
– ivanivan, Jul 28 at 17:39
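Roughly what that looks like; the flag-to-option mapping shown is only illustrative, since each ebuild decides how a USE flag translates:

    # /etc/portage/make.conf (Gentoo)
    USE="jpeg -gtk"

    # the package's ebuild then maps the flags onto configure options,
    # e.g. something along the lines of:
    #   ./configure --with-jpeg --without-gtk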
A program that needs manual changes to configure options is badly maintained by its author. My software compiles out of the box on all supported platforms.
– schily, Jul 28 at 18:02
@schily Normally, I agree that you shouldn't need to override the settings in a configure script. However, sometimes you want to use nonstandard locations and filenames (usually for testing), and sometimes the script simply guesses wrong! I've had that happen a couple of times.
– ErikF, Jul 28 at 18:09
Such things do not belong in configure. You can do all this in my software using make command-line macro definitions.
– schily, Jul 28 at 18:19
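A generic illustration of that style of override; the variable names and paths here are ordinary make conventions and purely hypothetical, not taken from schily's build system:

    # override the toolchain and search paths at build time instead of in configure
    make CC=clang CPPFLAGS="-I/opt/foo/include" LDFLAGS="-L/opt/foo/lib"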
Autoconf of course caches the results, and if you use it correctly, you rarely run configure.
My schilytools project (> 4000 source files, approx. 770,000 lines of code) currently needs approx. 800 autoconf tests. Running all tests takes 28 seconds on a 2.7 GHz Intel laptop, 2-3 hours on an HP-9000 pizza box from 1995, 4-6 hours on a Sun3/60 from 1996, and approx. a day on a VAX 11/780.
I still don't have to worry, since the configure script changes only approx. 5 times a year, and rerunning configure takes only 4 seconds on the Intel laptop, as all the other tests are cached. I of course have a make rule that causes configure to be run again only if the configure results are out of date.
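Such a rule usually has the shape below (automake, for instance, generates an equivalent one); the file names assume the standard configure/config.status layout rather than any particular project's makefiles:

    # Makefile fragment: rerun the configure machinery only when configure itself
    # has changed; cached test results are reused if configure was originally run
    # with caching enabled (the recipe line must be indented with a tab)
    config.status: configure
            ./config.status --recheck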
Downstream people have a different view. They treat the project as a throwaway package and run a full-blown configure 30 times a year, which takes 15 minutes a year.
I, on the other hand, only have to wait 25 seconds a year....
So the problem is how the downstream people use the software.
answered Jul 28 at 18:01 by schily
This is an interesting perspective, and I enjoyed your figures for running the same script across multiple disparate systems. But this answer is biased towards the experience of the developer or maintainer, and isn't analogous to the user experience of setting up many packages and configuring them all from source for the first time. This is why I suggested caching handled by the distro rather than some user-level cache.
– Zhro, Jul 29 at 18:05