More specifically, I’m thinking about two different modes of development for a library (private to the company) that’s already relied upon by other libraries and applications:

  1. Rapidly develop the library “in isolation” without being slowed down by keeping all of the users in sync. This causes more divergence and merge effort the longer you wait to upgrade users.
  2. Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.

As a side note: I believe these approaches are similar in spirit to the continuum of microservices vs monoliths.

Speaking from recent experience, I feel like I’m repeatedly finding that users of my library have built towers upon obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of code need to be rewritten.

I still think that approach #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.

How do you know when is the right time to switch? Are there any good strategies for avoiding painful upgrades?

  • thelastknowngod@lemm.ee · 5 points · 1 year ago

    Yeah, for this situation, versioned APIs are the answer. If you look at the Kubernetes ecosystem, for example, the entire thing is based on APIs, and every resource starts by specifying an API version on the very first line.

    apiVersion: v1
    kind: Namespace 
    metadata: 
        name: example-namespace
    

    This is how they can make upstream changes and not break existing environments in the process.

    • tatterdemalion@programming.devOP · 1 point · 1 year ago

      I’m not suggesting that my library is unversioned. It’s totally version controlled, and users can upgrade to whatever revision they want to pull in the changes. The changes I make upstream don’t affect anyone downstream until they decide to upgrade. It could even adhere to SemVer, but my problem remains: how to minimize rewriting user code? Is it better to have more small upgrades or fewer large upgrades? When is one strategy preferable to another?

      • thelastknowngod@lemm.ee · 6 points · 1 year ago

        SemVer seems logical. If most of your changes are breaking, though, I don’t think it really matters whether you release often or occasionally… If it’s often, users will get fatigued by upgrades. If it’s occasionally, they’ll be overwhelmed and push it off.

        If most of your changes are breaking, you should just disclose that the software is in an alpha/beta state and that it can’t be depended on to remain consistent until you have a defined policy about what gets released and when. It will be up to the users to decide if they are comfortable with those terms.

      • echo64@lemmy.world · 2 points · 1 year ago (edited)

        Using git is not the same as having a versioned library, for what it’s worth. Users can’t pick up the latest fixes by moving to a newer commit without also building against whatever API changes you’ve made to your library in the meantime. It sounds like your library is indeed entirely unversioned.

        • tatterdemalion@programming.devOP · 1 point · 1 year ago

          I also do SemVer-compliant releases. Backporting fixes is possible.

          It doesn’t change the fact that there are large breaking changes between versions. The only users of my library are within the same company, and we have all of the convenience of planning out the changes together.

          The challenge arises from developers doing large amounts of R&D on separate but coupled libraries. My library happens to be a very central dependency.

          • MagicShel@programming.dev · 2 points · 1 year ago

            If your library is a core dependency and it is constantly having breaking updates then something is deeply unwell in the environment. That is not sustainable, and it sounds like your library was created without a clear idea of what it should do.

            I’ve been around long enough to know these things happen, but you’re not going to find a good way forward because there isn’t one. This is going to be a pain point until either the library is stable or the project fails.

            • tatterdemalion@programming.devOP · 1 point · 1 year ago

              I think it’s close to stability. And the scope of the library hasn’t changed. It’s just solving a complex problem that requires several very large data structures, and I’ve needed to address a couple important issues along the way.

  • hascat@programming.dev · 3 points · 1 year ago

    Are there any good strategies for avoiding painful upgrades?

    If you’re not already doing so, hold design reviews with your users. Breaking API changes should be communicated early and in a way that makes it clear how the users benefit from the change. If the users don’t benefit, you should reconsider why you’re making changes in the first place.

  • tinker_james@programming.dev · 3 points · 1 year ago (edited)

    Context:

    I’m a dev that consumes company wide libraries, not an author of such libraries. So the following comes from that perspective.

    A couple questions:

    1. Is development and consumption of your library happening in parallel? It sounds like you use the users to vet new features or major changes… is that correct? (They are iterating with you and reporting back on issues or desired changes)
    2. Is your library made up of a group of isolated components? Or is it a library that does one or two major things and so a breaking change literally changes the whole of what the library does?
    3. How are the consumers of your library when it comes to adopting changes? Do they readily do it? Is there a good bit of inertia?

    My thoughts:

    First off, SemVer is definitely going to be important. Also, it sounds like you’re working toward API stabilization, which is going to help with iterating in the future.

    My idea 1:

    If your library is made up of several isolated components, what about doing major releases (ex 2.x.x -> 3.x.x) more frequently? Only include a small subset of breaking changes for one or two components rather than jamming a whole bunch in there just because it’s a “major version release”. The result is you could move quickly and iterate while also minimizing the impact on ALL of your users every release. Some of your users may be able to upgrade to the latest without having to touch much or any of their code.

    My idea 2:

    Do frequent major releases (ex 2.x.x -> 3.x.x) but always start with an “alpha” release that early adopters can implement and provide feedback on. This would shield the majority of your consumers’ code from having to iterate frequently, but it would also require you to enlist a group of committed early adopters who are diligent about iterating their code as often as you release.

    Feedback on the original option 1 and 2

    Option 1

    This could work if your users are excited about your releases. But, it could result in people NEVER upgrading because it’s too much work to do so. (I’ve seen this happen. No one upgrades until they absolutely have to.)

    Option 2

    Depending on the size of your company, this will be a lot of work for you and will slow you down. If you’re using your users to vet out new features, then everyone is going to have to iterate frequently (like you said) if experimental changes don’t work out.

    • tatterdemalion@programming.devOP · 1 point · 1 year ago

      Thanks for your thoughtful reply.

      1. Yes. Yes.
      2. One or two major things. Breaking changes will usually result in a data structure format changing, so algorithms that traverse the data structure need to be rewritten.
      3. One consumer is diligent about upgrading. The rest are much slower or rely on me to do it, but they continue building on top of an old version even after a new version is released.

      I like your idea of doing more frequent major releases and limiting the size of breaking changes within each release. It seems like a good compromise.

      • tinker_james@programming.dev · 2 points · 1 year ago (edited)

        Hm. In that case, smaller, more frequent breaking changes may also not be ideal. It sounds like no matter how small the breaking change is, everyone who uses the library is going to have to update their code… and if that’s happening frequently, it could get annoying.

        This may be completely off-base, but just going off of what you said about data traversal: would it be completely out of scope for your library to provide a consistent interface for getting/traversing the data it is responsible for? Or do the consumers all use/traverse the returned data in such unique ways that you couldn’t really develop a “general” API of sorts?

        • tatterdemalion@programming.devOP · 1 point · 1 year ago

          would it be completely out of scope for your library to provide a consistent interface for getting/traversing the data it is responsible for?

          This is actually something I’ve been considering. I think it would make sense for me to see what existing traversals could be upstreamed into my library. Some of them might be very domain-specific, but others might be generic enough to be generally useful.
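
          For example (the names here are entirely hypothetical), a minimal sketch of a small traversal facade that could keep consumer code stable while the internal representation keeps changing:

          class ChunkStore:
              def __init__(self):
                  self._nodes = {}  # internal layout, free to change between releases

              def iter_leaves(self):
                  # Stable traversal API: only this method needs updating when the
                  # internal representation changes, not every consumer.
                  yield from self._nodes.values()

          # Consumers depend on iter_leaves(), never on _nodes directly.
          leaf_count = sum(1 for _ in ChunkStore().iter_leaves())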

  • itsybitesyspider@beehaw.org · 3 points · 1 year ago

    I like library providers that can provide mechanical upgrade instructions. For example:

    model.adjust(x,1,y) is now model.single(Adjustment.Foo, x).with_attribute(y)

    Or whatever. Then people can go through your instructions find-and-replacing the changes, or even better, have an automated tool do it.
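
    For instance, sticking with the hypothetical model.adjust example above, even a crude regex-based rewrite script can automate most of such a migration (a real tool would work on the syntax tree instead). A rough sketch:

    import re
    from pathlib import Path

    # Rewrites model.adjust(x, 1, y) into the new fluent form described above.
    OLD_CALL = re.compile(r"model\.adjust\((\w+),\s*1,\s*(\w+)\)")
    NEW_CALL = r"model.single(Adjustment.Foo, \1).with_attribute(\2)"

    for path in Path("src").rglob("*.py"):
        source = path.read_text()
        migrated = OLD_CALL.sub(NEW_CALL, source)
        if migrated != source:
            path.write_text(migrated)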

    Also, you pay some of the maintenance burden by writing all this documentation, so you have some stake in keeping the changes minimal.

  • dave@feddit.uk · 3 points · 1 year ago

    One approach I’ve seen (from a user pov, not dev, so I’ve no idea of the code bloat it might cause) is to pass the API version number in the call. Then your api can be backwards compatible for 2 or 3 versions, giving other users time to upgrade their code. It de-couples things to give you all a bit of slack for both rapid iteration and stability.

    But it also depends on the ‘contract’ between you and the users, so be very clear about how long / how many versions will remain available. It will probably involve a ‘use by’ date.
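
    As a rough sketch of the idea in a code library (the function name and version numbers are made up): the caller states which API version it was written against, and the library keeps the older behaviour alive behind that argument for a few releases:

    SUPPORTED_VERSIONS = (2, 3)

    def load_model(path, api_version=3):
        # Callers pin the version they were written against; older code paths
        # stay available until their announced "use by" date.
        if api_version not in SUPPORTED_VERSIONS:
            raise ValueError(f"api_version {api_version} is no longer supported")
        if api_version == 2:
            return _load_model_v2(path)  # legacy layout, kept for slower upgraders
        return _load_model_v3(path)      # current layout

    def _load_model_v2(path):
        ...  # legacy loader

    def _load_model_v3(path):
        ...  # current loader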

    • tatterdemalion@programming.devOP · 1 point · 1 year ago

      I don’t understand. If you can upgrade just by passing a new version number to an API, then the API hasn’t really changed enough to necessitate any changes on the user’s part, right? Like, the function signatures themselves haven’t changed?

      The kinds of rapid iteration I’m talking about might involve completely removing APIs and making large breaking changes to the API surface itself, requiring user code to be rewritten to some degree.

      • dave@feddit.uk · 1 point · 1 year ago

        Well, of course it depends on the scale of the changes. Depending on how you’re calling them, the version could be in the URL, such as Zoom’s API including /v2/ in its URLs. Then you can introduce /v3 with many changes whilst leaving /v2 in place for some amount of time.

        If /v3 also means a complete change of database and other underlying infrastructure (e.g. removing the concept of a Zoom meeting), then you’ve got different challenges. Those are probably about overall design, not the API.
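
        As a toy sketch of that coexistence (the handlers and paths are made up, and no particular framework is assumed):

        def list_meetings_v2():
            return {"meetings": []}  # old response shape, kept until its sunset date

        def list_meetings_v3():
            return {"items": [], "next_page": None}  # new response shape

        # Both versions are served side by side during the transition window.
        ROUTES = {
            "/v2/meetings": list_meetings_v2,
            "/v3/meetings": list_meetings_v3,
        }

        def handle(path):
            return ROUTES[path]()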

          • thelastknowngod@lemm.ee · 1 point · 1 year ago

            The idea doesn’t change whether it’s REST or a code library. The version definition would just go in your requirements.txt or go.mod or whatever, instead of a URL endpoint.

  • kersplort@programming.dev · 2 points · 1 year ago (edited)

    Get good at the three-point turn.

    • Add the new code path/behavior. Release - this can be a minor version in semver.
    • Mark the old code path or behavior as deprecated. Release - this can be another minor version.
      • In between here, clean up any dependencies or give your users time to clean up.
    • Remove the old code path or behavior. Release. If you’re using semver, this is the major version change.

    This is a stable way to make changes on any system that has a dependency on another platform, repository, or system. It’s good practice for anything on the web, as users may have logged in or long running sessions, and it works for systems that call each other and get released on different cadences.
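
    A minimal sketch of what the middle step can look like in a code library (the names are hypothetical): the old entry point keeps working but warns, until the next major release removes it:

    import warnings

    def load_chunks(path):
        # New code path, introduced in a minor release.
        ...

    def load_blocks(path):
        # Old code path: deprecated in a later minor release and removed in
        # the next major version.
        warnings.warn(
            "load_blocks() is deprecated; use load_chunks() instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return load_chunks(path)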

  • Lmaydev@programming.dev · 2 points · 1 year ago

    Users will become dependent on anything you release.

    Microsoft has to be careful with private and internal members as users will use reflection and become dependent on them. Meaning internal changes break customer code.
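
    Reflection on .NET internals is the classic case, but the same thing happens in any language; a tiny hypothetical Python version of the problem:

    class Cache:
        def __init__(self):
            self._entries = {}  # private by convention only

    # A downstream user reaches into the "private" state anyway...
    entry_count = len(getattr(Cache(), "_entries"))

    # ...so renaming _entries in a "purely internal" refactor silently breaks
    # that caller, even though the public API never changed.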

    Admittedly, that level of caution is overkill in most cases. I take the stance that if users depend on private state, it’s their fault if a release breaks it.

    But the point remains you can’t easily change existing APIs once released.

    As mentioned above semantic versioning is a good solution to this. At least then they know when an upgrade will cause breaking changes.

    If it can be avoided, don’t put out anything that will likely be retired. Releasing experimental features that are likely to be replaced is always going to be bad.

    If you release a feature you need to plan to support it essentially.

  • coaleh@lemmy.world · 2 points · 1 year ago (edited)

    I’d say the ideal thing to aim for is case 2-ish. Building in isolation isn’t great, and dog-fooding your lib (via their projects) can help reduce the number of future breaking changes (getting things right the first time). I.e. ideally you’re helping people upgrade when needed and seeing their problems. Even if you’re forking their projects to try your experimental changes out for them and then providing a PR? That’s probably a lot easier said than done though xD

    A dependency management system to let consumers know when a new version is available could go a long way, or you could push for them to update by talking to them (as this is all internal?).

    Basically reducing the distance between teams and getting the tightest possible feedback loops should be the goal.

    That’s my rushed 2p while waiting for a haircut anyway ;)

  • Kissaki@programming.dev · 1 point · 6 months ago (edited)

    I don’t have multi-user library maintenance experience in particular, but I think a library with multiple users has to give them particular consideration.

    1. Make changes in a well-documented and obvious way
      1. Each release has a list of categorized changes (and if the lib has multiple concerns or sections, preferably sectioned by them too)
      2. Each release follows semantic versioning - break existing APIs (specifically obsoletion) only on major
      3. Preferably mark obsoletion one feature or major release before a removal release
      4. Consider timing of feature / major version releases so there’s plannable time frames for users
    2. For internal company use, I would consider the users close enough and few enough in number to set up direct feedback channels for needs, concerns, and upgrade support (and maybe even to push for upgrades at times)

    I think “keeping all users in sync” is a hard ask that will likely cause conflict and frustration (on both sides). I don’t know your company or project landscape though. Just as a general, most common expectation.

    So between your two alternatives, I guess it’s closer to point 1? I don’t think it should be “rapidly develop” though. I’m thinking more of mindful “isolated” lib development with feedback channels, somewhat predictable planning, and documented release/upgrade changes.

    If you’re not doing mindful thorough release management, the “saved” effort will likely land elsewhere, and may very well be much higher.

  • glad_cat@lemmy.sdf.org · 1 point · 1 year ago

    Some companies use feature flags for this. You add features, sometimes add feature flags, and you slowly deprecate the old API at the same time.
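
    A minimal sketch of that pattern (the flag and function names are made up):

    import os

    # Consumers opt in to the new behaviour per environment while the old
    # code path stays available until it is formally removed.
    USE_NEW_INDEX = os.environ.get("MYLIB_USE_NEW_INDEX") == "1"

    def build_index(data):
        if USE_NEW_INDEX:
            return _build_index_v2(data)  # new, experimental implementation
        return _build_index_v1(data)      # current default, to be deprecated later

    def _build_index_v1(data):
        return sorted(data)

    def _build_index_v2(data):
        return {value: i for i, value in enumerate(sorted(data))}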