Conservation of complexity
A significant part of what we do as programmers is manage complexity.
Broadly (and I'm simplifying), the complexity of the solution to a given problem is constant.
It's likely the solution will consist of multiple parts working together, and we decide which parts should be simple, knowing that doing so will make other parts more complex.
Some examples:
Building a new service? Make it stateful and you've vastly simplified how it works. But scaling it and routing traffic to it are now a lot more complex, because more of the system needs to know about and share that state.
Building and running a multi-stage CI/CD pipeline? That's pretty complex. But the steps developers have to follow to get code into production are now less complex.
Instrumenting your code to export relevant metrics and other information for debugging and troubleshooting? That's more complex than not doing it. But it makes future debugging and troubleshooting less complex (there's a sketch of this after the examples).
Writing a piece of code that does many things, with a lot of internal dependencies, is fairly easy, but testing it is a lot harder. Breaking it into smaller functions and using techniques like dependency injection can make the code more complex to write and for other programmers to follow, but makes it a lot easier to write tests for (sketched right after this list).
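To make that last example concrete, here's a minimal Go sketch; the Store interface and the function names are hypothetical, invented for illustration. The injected interface is the extra structure: the function no longer builds its own database connection or reads a global clock, so a test can substitute stubs for both.

    package report

    import (
        "fmt"
        "time"
    )

    // Store is the injected dependency: anything that can count recent
    // orders. In production it wraps the database; in a test it's a stub.
    type Store interface {
        OrdersSince(t time.Time) (int, error)
    }

    // DailySummary takes its collaborators as arguments instead of
    // reaching for a global database handle and the system clock.
    // Slightly more ceremony to write and wire up, but trivially testable.
    func DailySummary(s Store, now time.Time) (string, error) {
        n, err := s.OrdersSince(now.Add(-24 * time.Hour))
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%d orders in the last 24h", n), nil
    }

A test then needs no database and no real clock:

    // report_test.go
    package report

    import (
        "testing"
        "time"
    )

    // stubStore stands in for the database.
    type stubStore struct{ n int }

    func (s stubStore) OrdersSince(time.Time) (int, error) { return s.n, nil }

    func TestDailySummary(t *testing.T) {
        got, err := DailySummary(stubStore{n: 3}, time.Now())
        if err != nil || got != "3 orders in the last 24h" {
            t.Fatalf("got %q, err %v", got, err)
        }
    }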
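And a sketch of the instrumentation example, using Go's standard-library expvar package as a deliberately minimal stand-in (real services often use a fuller metrics library). The cost is a counter declaration plus an increment at each point of interest; the payoff is that a live snapshot of those counters is always one HTTP request away.

    package main

    import (
        "expvar"
        "fmt"
        "log"
        "net/http"
    )

    // Importing expvar registers a handler at /debug/vars that serves
    // all published variables as JSON.
    var requestsTotal = expvar.NewInt("requests_total")

    func handler(w http.ResponseWriter, r *http.Request) {
        requestsTotal.Add(1) // the added complexity: one line per metric, at every point that matters
        fmt.Fprintln(w, "ok")
    }

    func main() {
        http.HandleFunc("/", handler)
        // GET /debug/vars now returns current counter values for troubleshooting.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }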
The general pattern is that you can make it less complex here, but the complexity over there goes up. It's up to you to make mindful decisions about how to trade off the complexity of different parts of the system.
Just like its real-world counterparts (e.g., conservation of momentum), this only holds for your solution in isolation. Over time you may find that the complexity of your solution needs to increase, because external forces (e.g., changing business requirements) require the solution to change. Rarely (and happily) you'll discover that you can reduce the complexity of your solution because, say, some other system has implemented part of it, so you no longer have to.