Testing updates of external dependencies



  • We're having a discussion about what to do when we update a NuGet package or another external dependency: should we update to the new version everywhere, so the whole project uses only one version? Some think we should, others think we shouldn't. The main argument against updating every location is that we then need to re-test everything that uses the dependency.

    Can we trust external suppliers to test their code, so that we can upgrade minor versions freely? Is basic unit-test coverage enough to verify the changes?

    Checklist:

    • Does it build?
    • Check automated test coverage for the places the dependency is used
    • Read the changelog in a pair to estimate the risks

    Are there other things we need to consider when updating libraries?



  • Personally, I'd prefer to have only one version of any given package in use (or one in development and one in production, if you're going to be moving to a new version on the next deployment), as it reduces your vulnerability surface in the case of security bugs (I'm thinking of things like the recent Bash and ImageMagick issues here).
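
    In the NuGet ecosystem specifically, one way to enforce the single-version policy is central package management: a `Directory.Packages.props` file at the solution root pins each package to one version for every project. This is a sketch only; the package name and version number here are illustrative, not a recommendation.

    ```xml
    <!-- Directory.Packages.props at the solution root.
         Requires NuGet central package management support
         (NuGet 6.2+ / recent .NET SDKs). -->
    <Project>
      <PropertyGroup>
        <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
      </PropertyGroup>
      <ItemGroup>
        <!-- Example entry: every project that references this package
             gets this one version. -->
        <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
      </ItemGroup>
    </Project>
    ```

    Individual projects then declare `<PackageReference Include="Newtonsoft.Json" />` with no `Version` attribute, so a version bump happens in exactly one place.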

    With that said, though, no, I don't think you can trust external suppliers to test their code sufficiently (again, see the above two bugs which were present for years in the code in question).

    Reading the change log may not be sufficient warning of changes, either; what is marked as a bug fix may be changing behavior that you believed was correct before, for example.

    I'd say that as full a regression test as possible is called for any time you update libraries you depend on. To reduce the risk, you might want to have a test suite specifically for testing the libraries you use, and if they're open source, you might contribute some test cases to their build infrastructure.
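
    The dependency-specific test suite mentioned above is sometimes called a characterization or "pinning" test suite: it asserts the exact behaviours your code relies on, so a "bug fix" that silently changes those behaviours fails loudly on upgrade. A minimal sketch in Python, using the stdlib `json` module as a stand-in for a third-party dependency (the behaviours pinned here are illustrative):

    ```python
    import json

    # Pin the exact behaviours our code relies on. Re-run this suite
    # after every dependency upgrade; a failure flags a behavioural
    # change that the changelog may not have advertised.

    def test_round_trip_preserves_values():
        payload = {"id": 7, "tags": ["a", "b"], "active": True}
        assert json.loads(json.dumps(payload)) == payload

    def test_sorted_output_is_deterministic():
        # Example: code that builds cache keys relies on stable output.
        assert json.dumps({"b": 1, "a": 2}, sort_keys=True) == '{"a": 2, "b": 1}'

    def test_invalid_input_raises_expected_error():
        # Example: error handling that catches this exact exception type.
        try:
            json.loads("{not json}")
            raise AssertionError("expected JSONDecodeError")
        except json.JSONDecodeError:
            pass

    test_round_trip_preserves_values()
    test_sorted_output_is_deterministic()
    test_invalid_input_raises_expected_error()
    print("all pinning tests passed")
    ```

    The point is that each test encodes an assumption *your* code makes, rather than re-testing the library in general, which keeps the suite small enough to run on every upgrade.
    
    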


