On dependency version pinning
Two ways of using a module or perl API function:
First way:
1. Read the documentation. Check the changelog, check open bugs. Use only "good" modules.
2. Decide which module/Perl API functions to use and how.
3. Write code, write tests. Write assertions in production code. Write proper error handling.
4. Test on several perl versions and several module versions.
5. If tests fail somewhere, investigate; change minimum version requirements or work around the problem.
6. Write down strict minimum dependencies in Makefile.PL etc.
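Step 6 might look like this in a cpanfile (a sketch only: the module names and version numbers here are placeholders, not recommendations):

```perl
# cpanfile -- declare *minimum* versions, not exact pins
requires 'perl', '5.010';            # oldest perl the code is tested on
requires 'JSON::PP', '2.27';         # first release with the API we call
requires 'LWP::UserAgent', '6.00';

on 'test' => sub {
    requires 'Test::More', '0.88';   # for done_testing()
};
```

The same minimums can go into Makefile.PL's `PREREQ_PM` if you prefer ExtUtils::MakeMaker over a cpanfile.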
Second way:
1. Skim the documentation.
2. Write code.
3. Test manually.
4. Pin module versions.
5. Always use one version of perl, use perlbrew, never upgrade.
So I think the Second Way just introduces technical debt. You save time during development, but in the end
you have no idea how your code works, or where.
I see a lot of advice to use perlbrew instead of the system perl, because the system perl can be broken (it can, indeed), and articles about Gemfile.lock-like systems.
It looks like that advice is mostly for/from people who prefer the Second Way.
You don't really need version pinning if you use the First Way (well, it's useful, but you can live without it). It's also not a problem to upgrade perl: if you do so and find a bug, it will most likely be a bug in your code. And fixing a bug means reducing technical debt.
Both ways are OK; the Right Way should be determined by business requirements. But I think people
should understand why and when to use which approach.
> So I think the Second Way just introduces technical debt.
No; that's why Carton has the `update` command to update versions in your snapshot to the latest that satisfies your requirements, just like you can do with `cpan-outdated | cpanm` against your system perl.
(I'm sure Pinto has a similar feature to update modules to the latest.)
And that's what Carton excels at, as opposed to *manually* pinning versions in the cpanfile, which makes it difficult to update to the latest versions.
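For concreteness, a minimal sketch of that workflow (the module name is a placeholder; `carton install`, `carton update`, and `carton exec` are the actual Carton commands):

```perl
# cpanfile -- loose requirements; Carton records the exact resolved versions
requires 'Plack', '>= 1.0';

# Then, from the shell:
#   carton install               # resolves deps, records exact versions in cpanfile.snapshot
#   git add cpanfile cpanfile.snapshot
#   carton update                # later: bump the snapshot to the latest satisfying versions
#   carton exec -- prove -lr t/  # run the test suite against the pinned set
```

The snapshot file, not the cpanfile, is what gets "pinned", so the declared requirements stay loose.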
> You save time during development, but in the end
> you have no idea how your code works, or where.
That's completely the opposite.
Tracking dependencies with Carton is even better because you have the *concrete* idea of how your code works with which version, along with the ability to roll back when you don't have time to investigate whether it's a bug in your code or regression in the module.
Unlike when you always rely on the latest versions on CPAN, where you have to manually find out which module version it used to work with.
Now, re-reading your post, it's funny that the "First way" reads more like the workflow with Carton/Pinto.
> 5. If tests fail somewhere, investigate; change minimum version requirements or work around the problem.
> 6. Write down strict minimum dependencies in Makefile.PL etc.
Replace step 6 with Carton's cpanfile and running `carton install`, or the equivalent Pinto management command. That's what these tools are for, and we're all in agreement.
"Pinning versions", in the Carton/Pinto context, doesn't mean pinning versions forever and never upgrading. It means pinning versions between dev instances and deployment environments, then tracking history.
> "Pinning versions", in the Carton/Pinto context, doesn't mean pinning versions forever and never upgrading
Yes, I agree.
> Replace step 6 with Carton's cpanfile and running `carton install`, or the equivalent Pinto management command.
> That's what these tools are for, and we're all in agreement.
My point was that Makefile.PL is enough, because specifying a minimum version is enough.
A minimum version for a module is kind of a natural thing: not all API functions exist in the very first version of a module; new functions are added in newer versions.
A CPAN module should always be backward compatible with its older versions (if a module author introduces large breaking changes, he'd better just rename the module).
Complex version requirements like "> 0.5, != 0.98, != 0.99" likely indicate a bug in the module.
Version requirements with an upper bound, like "> 0.5, <= 0.91" (when versions newer than 0.91 already exist), likely indicate that:
1) the module is broken after 0.91;
2) the module is total crap, so it's likely to be broken anyway;
3) you use some fancy/undocumented API, so you worry a lot about compatibility;
4) you don't have a good enough test suite.
All of the above is technical debt.
If you are just worried about new versions of modules deployed to production, that could be caught by running the test suite during each deploy (however, not all deploy
systems run tests on production right after deploy, and there can be new versions released between testing on the CI server and the actual deploy).
> No; that's why Carton has the `update` command to update versions in your snapshot to the latest that satisfies your requirements
> "Pinning versions", in the Carton/Pinto context, doesn't mean pinning versions forever and never upgrading.
Yes, agreed: you can bump pinned versions, and there is a command to automatically update to the latest versions. However, that is still a _manual_ process.
You need to re-pin versions on the dev machine and test your code (this means _manual_ testing).
That likely indicates that the test suite is bad.
My post was about the situation where lots of users use (or advise using) version pinning; that likely indicates that they have technical debt:
> I see a lot of advice ...
> It looks like that advice is mostly for/from people who prefer the Second Way.
Of course a test suite cannot completely replace manual testing; also, some things cannot be tested well with unit tests (GUIs, smoke tests, race conditions, timing issues).
Also, there can be (a lot of) scenarios where version pinning is the right thing to do and does not indicate any technical debt (I myself had `!= 0.090, != 0.091` prereqs for
Test::Deep when I realized that those versions were broken, and replaced them with `>= 0.092` when I realized that `!=` does not work in most CPAN clients).
But you know, if lots of people use version pinning for _all_ of their modules, then upgrade them (all at once) from time to time, and then try to test the whole application manually,
that more likely means that they have no idea how their code works, rather than that they are very careful about versions.
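As a sketch, the Test::Deep situation above could be expressed in a cpanfile like this (the version range syntax follows CPAN::Meta::Spec; as noted, support for `!=` varies across clients):

```perl
# What I tried first -- exclude just the known-broken releases:
# requires 'Test::Deep', '>= 0.089, != 0.090, != 0.091';

# What actually works with most CPAN clients -- raise the minimum instead:
requires 'Test::Deep', '>= 0.092';
```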
“During deployment to the live system” is never the right time to find out that some module you depend on had an update that affects the behaviour of your own code for whatever reason.
The point of pinning versions with tools like Carton is not to be able to never upgrade modules. The point is to be able to do upgrades systematically and deliberately, in your development environment, where you can test them – not randomly depending on the time of day you happened to push the deploy button. Then, when you deploy, you are sure that production is running the code you tested – including all of the modules you tested with, in the versions you tested with.
What if some module changed in a way that does not outright break your code, but instead seems to work fine, except that it silently destroys data? Do you really want to find that out only after deploying to production?
The right time to upgrade your dependencies is when you are still working on the code, when you have time to test the upgrade and make sure it works – not when you are just about to push the code live.
Then if you find an upgrade causes a problem, you can of course choose to hold it off – just as easily as you can fix your code to work with the new version. Like all questions of technical debt, this is a decision to be made consciously. It should not be based on dogma.
Will some people make bad choices then? Sure. But how is their bad judgement possibly an argument in favour of making your production environment random and non-reproducible?
vsespb:
> A CPAN module should always be backward compatible with its older versions (if a module author introduces large breaking changes, he'd better just rename the module).
I disagree, and I believe not all CPAN authors share that idea. And you can't enforce it either.
> If you are just worried about new versions of modules deployed to production, that could be caught by running the test suite during each deploy
In an ideal world where all the bugs in your app can be caught by a test suite that covers 100% of your code, maybe.
> (however, not all deploy
> systems run tests on production right after deploy, and there can be new versions released between testing on the CI server and the actual deploy).
You seem to understand the exact issue, and still say the solution is unnecessary, and I don't understand.
> You need to re-pin versions on the dev machine and test your code (this means _manual_ testing).
I don't follow how `carton update; prove` is more manual than updating your modules in site_perl and running the tests again.
@Aristotle
> some module changed in a way that does not outright break your code, but instead seems to work fine, except that it silently destroys data?
I find this example unrealistic:
a) a new version of a module destroys data,
b) your tests do not catch it,
c) yet you can catch it easily with manual testing by hand.
Anyway, for manual testing there is a Staging environment (for web applications). The deploy to Staging is made just before the production deploy.
> "During deployment to the live system"
see my notes below
Otherwise, I agree with your post.
@Tatsuhiko Miyagawa
> I disagree, and I believe not all CPAN authors share that idea. And you can't enforce it either.
I believe using such a module and not dropping it is an example of technical debt. There can be exceptions: a module can be too good to drop, or the module is 15 years old and
the breaking changes were introduced 7 years ago.
Also, using such a module where the author has not documented that he's not going to maintain backward compatibility is technical debt too.
>> (however, not all deploy systems run tests on production right after deploy, and there can be new versions released between testing on the CI server and the actual deploy).
> You seem to understand the exact issue, and still say the solution is unnecessary, and I don't understand.
I was describing exceptions to the rule:
a) Server development (where we can talk about production and development environments, and where only one production instance (or cluster) exists).
Not all Perl code is web server applications! Some are standalone applications (like Carton itself, Padre, ack).
AND
b) That particular way of deploying, where tests run on the CI server. I've seen different systems, where tests run on Production or Staging (and if tests fail, worker processes
are not restarted, migrations are not run, symlinks to code are not changed).
AND
c) Modules are upgraded on each deploy.
AND
d) There is no Staging, or no manual testing on Staging.
AND
e) The chance of (a race condition between testing on CI and deploying to prod) * (the chance of a broken module) is a real issue.
Of course, if one develops a web application with the deploy setup described above, he'd better pin versions and avoid race conditions (but this is a lower priority
compared to real outstanding bugs). However, my idea was that pinning usually happens because there are so many bugs that the developer has no idea how his code works
and why it breaks with certain perl/module versions, so pinning becomes priority #1, because otherwise the application simply breaks.
>> You need to re-pin versions on the dev machine and test your code (this means _manual_ testing).
> I don't follow how `carton update; prove` is more manual than updating your modules in site_perl and running the tests again.
I was talking about _manual_ testing (when you run your application and click here and there to see if it still works), not manually running the test suite.
Here is the note about Carton in my original post:
> I see a lot of advice to use perlbrew instead of the system perl, because the system perl can be broken (it can, indeed), and articles about Gemfile.lock-like systems.
> It looks like that advice is mostly for/from people who prefer the Second Way.
It was not about the Carton workflow being bad, or it being a 100% indication of technical debt.
> (but this is a lower priority compared to real outstanding bugs). However, my idea was that pinning usually happens because there are so many bugs,
"Lower priority", "usually happens": these are all subjective. When a production system fails to serve your customers' requests, fixing production code is a higher priority than fixing a bug in your module and waiting for upstream to merge your patch and release it on CPAN.
--
All I can say is that you are trying to frame things as if there are only two ways to do development:
a) 100% test coverage, CPAN modules are always backwards compatible, later versions of modules are always better than older versions, and all the bugs, be it in your code or CPAN modules, can be caught by your awesome test suite.
b) No tests, only manual testing, no reading of documentation. Too lazy to update modules, so let's pin the modules, use one version of perl, and never upgrade.
Heck, your original post says there's this First Way and this Second Way.
It's not as simple as that.
There's no such thing on CPAN as a module that never breaks backward compatibility. CPAN has never been a place where the latest version of any single module has no bugs at all.
Pinning modules and tracking history is to ensure these problems don't happen in the middle of development or on the way from development to deployment, IN ADDITION TO testing, updating, and fixing bugs as they are found, NOT to avoid testing or never upgrade modules.
Don't try to frame the practice of freezing versions as an indication of anything about software development practice.
> Don't try to frame the practice of freezing versions as an indication of anything about software development practice.
That's exactly what I am trying to do!