David Scott Bernstein's Blog, page 20

July 19, 2017

Pathologies of Redundant Code

Remember Y2K? Before the turn of the millennium, computer storage was at such a premium that years were stored with only their last two digits, so the year 1989 was stored as 89. This worked fine up until the end of 1999, when the year rolled over to 00, which many programs would interpret as 1900 rather than 2000. Hundreds of millions of lines of code had already been written and were in operation. All of it had to be scanned to make sure that dates were handled correctly. This turned out to be a massive effort, but in the end the critical systems that needed to be updated were updated and our civilization didn’t descend into anarchy.


I knew people who worked on systems that were affected, but they had built their own date libraries years earlier that already addressed these concerns. Their software was good to go, but that wasn’t true of many of the banking programs I worked on back then.


I understand the tradeoff: in the early 1970s, memory was at such a premium that you wanted to store dates as efficiently as possible. Looking at the timeshare price sheets from back then, you could spend a thousand dollars a month renting one thousand bytes of storage, only 1K! Today, Google gives you a terabyte for free, far more storage than existed in all the computers in the world back then.


Kind of puts things in perspective, doesn’t it?


There is a lot of redundancy in code out there. We don’t have much of a forum for sharing what we’ve learned. How many times have software developers “discovered” the same valuable techniques?


When I talk about redundancy in code, I’m not just referring to redundant behavior or redundant state. Redundancy can exist at many levels. We can have redundant relationships, redundant concepts, redundant construction, redundant conditionals, and so on. Each time a redundancy is introduced into a system, it degrades the system just a little bit. When this is done over and over again, a system becomes viscid and difficult to work with. It’s hard to make changes because many areas of the code are impacted by a single change. This is a sure sign that you have redundancy and split functionality in your system.


Split functionality is even worse than redundancy. Instead of repeating a process in multiple places, split functionality puts different parts of the same process in different places, requiring that all the pieces be in sync in order for the process to work. To me this is a particularly odoriferous code smell because the process breaks if you touch any piece of it.


The solution is to refactor and bring the pieces together. Organize the pieces by how they’re varying. If several things vary together, organize them so they’re located together. This makes code easier to read and understand and reduces the chance that things can get out of sync.
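
As a toy sketch of what bringing the pieces together can look like (the postal-code domain and every name here are mine, not from the post):

```python
# Split functionality (before): validation lives in one module and
# formatting in another; both must agree on what a postal code is.
def is_valid_postal_code(code):
    return len(code) == 5 and code.isdigit()

def label_line(city, code):           # imagined to live elsewhere
    return f"{city} {code}"

# Brought together (after): the pieces that vary together live together,
# so there is one place to change and nothing to drift out of sync.
class PostalCode:
    def __init__(self, code):
        if not (len(code) == 5 and code.isdigit()):
            raise ValueError(f"invalid postal code: {code}")
        self._code = code

    def label_line(self, city):
        return f"{city} {self._code}"

line = PostalCode("12345").label_line("Springfield")
```

Once the process lives in one class, a change to the postal-code rules touches exactly one place.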


Software is like Russian nesting dolls and good software has many layers. Each layer of abstraction provides the opportunity for an additional layer of indirection. But again, each abstraction must be represented in our system only once so that there is only one place to extend in the future.

Published on July 19, 2017 08:21

July 12, 2017

Quality Code is Nonredundant

The last code quality that we’ll discuss in this series, the “N” in CLEAN, calls for code to be nonredundant. On its surface it seems pretty straightforward but there is some subtlety to redundancy as well.


It’s easy to see why redundancy in code is a bad thing. It’s a duplicated effort. It means that when a business rule changes, there are multiple places where that change has to be applied and that significantly drives up the cost of maintenance. It also makes code harder to read and understand.


Unfortunately, there is a lot of redundancy in code. I see it all the time when I review the code from my clients. Sometimes it’s obvious and you can spot the duplication immediately. Other times, it’s less obvious but upon close examination we find that we’re basically trying to do the same thing in more than one place in the code.


We like to use the “once and only once” rule: when defining a process or business rule, define it in one place so it’s easy to reference and access in the future.


Redundancy also has a subtle side that can be very difficult to pick up on. Perhaps we have two processes that are mostly the same, but some of the steps differ or one process has an extra step. There are design patterns that help us manage these kinds of variations, and often, when we can see the pattern and apply it in our designs, we can get rid of the redundancy and improve our code quality.
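
One such pattern is the Gang of Four’s template method. Here’s an illustrative sketch with a made-up reporting domain (the names are mine): the shared steps are written once, and only the step that varies differs per subclass:

```python
from abc import ABC, abstractmethod

class ReportExporter(ABC):
    """Template method: the shared steps are written once; only the
    varying step (render) differs per subclass."""

    def export(self, records):
        cleaned = [r.strip() for r in records]   # shared step
        body = self.render(cleaned)              # the step that varies
        return f"BEGIN\n{body}\nEND"             # shared step

    @abstractmethod
    def render(self, records): ...

class CsvExporter(ReportExporter):
    def render(self, records):
        return ",".join(records)

class LineExporter(ReportExporter):
    def render(self, records):
        return "\n".join(records)

csv_out = CsvExporter().export([" a", "b "])
```

Two mostly-identical processes collapse into one shared skeleton plus two small variations.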


I can tell you as a writer and an author that redundancy is death. There is no quicker way to bore, shut down, or shut off your reader than to repeat yourself. The same thing is true in code.


In good object-oriented programs there should be one authoritative place where each decision is made. These decisions often happen in factories, which are objects responsible for the creation of other objects. Moving business rules to object instantiation time rather than object execution time lets us centralize decisions and limits the number of permutations the code can take, so that behaviors are more reliable and software is more testable.


There are several creational design patterns that help eliminate redundancy in code. One example is the Gang of Four’s abstract factory pattern, whose intent is to “provide an interface for creating families of related or dependent objects without specifying their concrete classes.”


For example, let’s say I have an ecommerce system, and when a sale is generated I want to create an invoice and a packing slip, apply tax, and so on, but I want to do these things differently depending on which country I make the sale in, so I want to work with families of objects. My abstract factory will return one set of objects if the sale happened in the United States but another set of objects if the sale happened in a different country, say the Netherlands.
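
A minimal sketch of what such a factory might look like. The post gives no implementation, so all class names and tax rates below are invented for illustration:

```python
from abc import ABC, abstractmethod

# All names and tax rates below are invented for illustration.

class TaxCalculator(ABC):
    @abstractmethod
    def tax(self, amount): ...

class UsTaxCalculator(TaxCalculator):
    def tax(self, amount):
        return round(amount * 0.07, 2)   # made-up rate

class NlTaxCalculator(TaxCalculator):
    def tax(self, amount):
        return round(amount * 0.21, 2)   # made-up rate

class SaleFactory(ABC):
    """Abstract factory: one creation method per member of the family."""
    @abstractmethod
    def create_tax_calculator(self): ...

    @abstractmethod
    def create_invoice_header(self): ...

class UsSaleFactory(SaleFactory):
    def create_tax_calculator(self):
        return UsTaxCalculator()

    def create_invoice_header(self):
        return "Invoice (US)"

class NlSaleFactory(SaleFactory):
    def create_tax_calculator(self):
        return NlTaxCalculator()

    def create_invoice_header(self):
        return "Factuur (NL)"

def factory_for(country):
    """The one authoritative place where the country decision is made."""
    return {"US": UsSaleFactory, "NL": NlSaleFactory}[country]()

us = factory_for("US")
nl = factory_for("NL")
```

Client code asks the factory for its whole family of objects and never branches on country again; supporting a new country means adding one factory and its family.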


If I set things up correctly, then as I begin to do business in new countries, I only have to update the abstract factory and add new variations to the existing structures. All of the code that uses these variations (invoicing, accounts payable, and so on) can work without any modification as we start to do business in new countries. This technique not only helps eliminate redundancy but also keeps the system extensible to new variations in the future.


But not all redundancies are easy to spot. Some get hidden behind the best of intentions. If I have the time, I’ll try to figure out these subtle forms of redundancy because when I do I often find that the code becomes much clearer and I discover patterns I didn’t see before, which helps me improve the design of the code.


This is the worst part of redundancy: it hides the truth and creates a barrier to understanding. If we want our code to be clear, we should strive to remove redundancy at all levels.

Published on July 12, 2017 08:18

July 5, 2017

Assertiveness and Testability

It’s quite difficult to test inquisitive code. Very often the results of running a piece of code can only be found in another object. Therefore a test must use a separate entity, which we typically call a “spy,” to monitor the external object and validate that it was called correctly. This adds unnecessary complexity to the test and it means more objects are collaborating than need to be.


When objects are assertive, they have everything they need to fulfill their tasks. If they don’t have the facilities to do what they need to do directly, they delegate to other objects that do it for them.


Delegating to another object to perform a function is very different from being inquisitive with other objects. The difference has to do with who has the responsibility: inquisitive code gives up responsibility and assertive code keeps it. Because assertive code is responsible for itself, it’s straightforward. Tests exercise the responsibilities of the code and can remain mostly autonomous, which simplifies testing scenarios quite a bit.


When tests are assertive and based on acceptance criteria, the code that’s written to make those tests pass is also assertive. It comes down to writing code to implement a feature and make its acceptance criteria pass versus implementing a specification. A specification can be vague, but when code is assertive and has well-defined acceptance criteria, it’s straightforward to write tests around it. This means we know when we’re done, and developers spend less time gold-plating and more time implementing valuable features.


Sometimes, writing a test for a behavior is the main clue that tells me I put the behavior in the wrong place. If my test requires spies, which are mocks that observe objects that are not the objects under test, then that almost always tells me I’ve put the behavior in the wrong place. Instead of trying to use a spy in my test I’ll try to redesign my code so that the behavior is in a better place.


How do we deal with inquisitive code? The answer once again is to refactor. The main refactorings to support assertiveness in code are move method, extract method, and extract class.
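
As a hedged sketch of move method (a toy example of my own, not the author’s code): the total calculation asks an order for its internals, so we move it onto the order itself:

```python
# Inquisitive (before): the rule lives outside the object that owns the data.
class Order:
    def __init__(self, items):
        self.items = items            # list of (price, quantity) pairs

def order_total(order):
    return sum(price * qty for price, qty in order.items)

before_total = order_total(Order([(2.0, 3), (1.0, 1)]))

# Assertive (after "move method"): the order answers for itself,
# and its data can stay private.
class AssertiveOrder:
    def __init__(self, items):
        self._items = items

    def total(self):
        return sum(price * qty for price, qty in self._items)

after_total = AssertiveOrder([(2.0, 3), (1.0, 1)]).total()
```

The behavior is unchanged, but the test now talks only to the object that owns the responsibility, so no spy is needed.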


Very often, objects have the wrong behavior or they have too much behavior. The way to deal with that is through a form of mitosis: splitting one object into two or more. Physicists have a term that comes to mind here: “spooky action at a distance.” They use it to describe quantum effects, but I think it’s also an apt term for inquisitive or unassertive code.


When you want to divide an object into two or more objects, there are various techniques for doing so. One approach is to separate out an object’s data by usage: if one set of methods uses one set of data and another set of methods uses another, you can typically extract the two sets into two separate classes. This may or may not make sense in a given case. Fortunately, I have come across a rule of thumb that helps me in situations like these and, in fact, in most situations: whenever I have a tradeoff to make in programming, a choice between one implementation and another, I simply ask myself which approach is easier to test.
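
A sketch of that extraction, using a hypothetical Person whose name methods and address methods touch disjoint sets of data (all names here are mine):

```python
class PersonName:
    """Extracted: the name data with the methods that use it."""
    def __init__(self, first, last):
        self.first, self.last = first, last

    def full_name(self):
        return f"{self.first} {self.last}"

class MailingAddress:
    """Extracted: the address data with the methods that use it."""
    def __init__(self, street, city):
        self.street, self.city = street, city

    def label(self):
        return f"{self.street}, {self.city}"

class Person:
    """The original class now composes the two extracted classes."""
    def __init__(self, name, address):
        self.name, self.address = name, address

ada = Person(PersonName("Ada", "Lovelace"),
             MailingAddress("12 Main St", "London"))
```

Each extracted class can now be tested on its own, which is the tiebreaker the rule of thumb asks for.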


This is an important question both practically and philosophically. It’s important practically because I believe in testability: I don’t consider code written unless it also includes automated tests that exercise its behavior and validate that it works as expected. It’s important philosophically because, as you may have noticed, there is a strong relationship between these code qualities and testability.


As we improve each one of these code qualities, we’re also improving the testability of our code. This is much more than mere coincidence. One of the most important aspects to consider when developing software is how testable the design is, or how testable the implementation of the design is. Understanding that question more than anything else has led me to understand what good software development is all about. This one singular idea has been a key distinction that has helped me identify better ways of building software. Use it well.

Published on July 05, 2017 08:18

June 28, 2017

Pathologies of Inquisitive Code

Objects that are not assertive are either inquisitive or mute; mute objects don’t interact with the outside world at all. If an object does interact with other objects, the real question with regard to assertiveness is, “Who’s in charge?”


One object can delegate a task to another object but still be in charge of that task and therefore still be assertive. It’s when an object gives up its control of a task, when it doesn’t have access to the resources it needs to fulfill its tasks, that it has to become inquisitive.


Inquisitive code performs poorly. If you look at a sequence diagram, one of the 14 UML diagram types, you’ll see that the sequence of calls for inquisitive code is more involved than the sequence of calls for assertive code. This is because inquisitive code has to go through other objects’ public interfaces to get at the state it needs to accomplish its tasks. When one object makes repeated calls to other objects to access their state, it can degrade performance even as it unnecessarily increases complexity, making the code more difficult to read and understand. Business rules end up spread out across multiple objects, and it becomes harder to define boundaries for objects as the objects themselves become less and less well-defined.


The very essence of what an object is centers around its responsibilities. Inquisitive objects end up being poorly defined, which distorts the overall object model and makes it difficult to figure out where other responsibilities belong in a system.


Martin Fowler describes some of the “code smells” related to inquisitive code with funny and memorable names such as “feature envy,” where one object constantly accesses features of another object that should be among the responsibilities of the calling object, and therefore part of the calling object itself. He also uses the term “inappropriate intimacy” for inquisitive code, because when one object calls into other objects this way, it often becomes bound to those objects’ implementations. This kind of interdependence drives up the cost of change, because when one piece of code changes, the dependent pieces break and must also be changed to stay in sync.


When code is assertive instead of inquisitive, all the behavior is in one place so it’s far easier to read and understand.


Inquisitive code also has encapsulation issues: the state and behavior it should possess live somewhere else, and the state that ends up shared between behaviors in separate objects would often be better left private.


All of these issues are bad, but as we scale up to building multi-threaded and multi-user systems, they become paramount. Inquisitive code is more difficult to make multi-threaded and to run on high-availability servers. If high performance is important, then writing assertive code is also important.


In a lot of ways, inquisitive code is a holdover from procedural programming. Procedural programs can be inquisitive. They can take a global perspective. But in object-oriented programs, we want to create behaviors through the interaction of objects rather than using explicit logic because it’s more maintainable and extendable to do it this way.


Assertive code supports this, inquisitive code doesn’t.

Published on June 28, 2017 09:54

June 21, 2017

Quality Code is Assertive

The next code quality, the “A” in CLEAN, stands for assertive.


I don’t hear a lot of people talking about it but I think it’s an important code quality. We want the software we write to be assertive, to be in charge of its own state. We want objects to tell the world things rather than ask the world things. Why? Because this fundamentally supports object-oriented programming.


We need to think of objects as autonomous units that are in charge of themselves, responsible for their own state, and capable of managing themselves. This is really important in making the object-oriented paradigm work. When the responsibilities of an object get spread out among multiple objects in a system it’s hard to understand what’s going on in the code. It makes maintenance difficult because we’ll have to touch several objects in a system. That usually means reading through a huge amount of code.


One way to think about assertiveness in code is to put all the behavior related to the data of an object with that object. Doing this makes the code more performant, makes it easier to test, makes it easier to understand, and makes the system more modular.


But sometimes a process requires accessing data from more than one object. In that case, you may want to pick the object whose data the process uses most, or pick one for some other reason. It’s just a rule of thumb: keep behavior with the data it consumes in order to keep the code assertive; that is, it shouldn’t constantly call out to other code to access state.


Assertive objects cannot be created in a vacuum. If all the other objects in a system are inquisitive then new objects will likely also be inquisitive. If a system is generally assertive, it should be rather straightforward to introduce other assertive objects. This is one of the other main benefits of having an assertive system. It’s much more straightforward to extend the object model because assertive objects are autonomous. They are in charge of their own state so they’re more easily migrated, extended, removed, or changed.


Assertive systems put the responsibilities of the system in the right places, so that you can look at the object model and understand where a system’s behavior is. This approach treats the objects in a system as actors carrying out their tasks and implementing features.


Assertive systems are straightforward to understand because the behaviors in a system are in the right place, making code easier to read and understand.


Note that assertive code is different from procedural code. Procedural code is also in charge, but it’s in charge of everything. Procedural programs take a global perspective. They are the hands of God coming down from on high, commanding the system to do their bidding. It’s an easy-to-follow but somewhat unrealistic programming paradigm: easy to follow because we tend to think in a straightforward narrative way, but bad because it doesn’t scale well.


We’ve seen this time and time again in procedural programs. They’re fine for small tasks or services but as soon as we start to scale them up and build enterprise systems, we find that those systems increase in complexity rather quickly, ending up in unmanageable states. This is why we went to the object-oriented paradigm in the first place in the early 90s.


Well, I went to the object-oriented paradigm in the early 90s. It might have taken the rest of the world a few years more. I’ve been on the leading edge of software development for about thirty years and although I’ve been doing object-oriented programming for over a quarter of a century, I keep finding deeper and deeper levels of understanding for how to build systems with objects.


As I’ve learned to build more autonomous objects, the systems I build have become more flexible and more agile.


One way to tell when code is lacking in assertiveness is when an object makes excessive use of accessor methods in other objects. When objects call get and set methods on other objects they’re trying to control the other object’s state. If objects were people in the real world, this would be illegal. Even though it’s the virtual world we should respect an object’s autonomy.
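
A toy illustration of the difference (the counter domain and all names are mine): a caller driving another object’s state through get and set, versus telling the object what to do:

```python
class InquisitiveCounter:
    """Callers reach in through get/set to drive this object's state."""
    def __init__(self):
        self.count = 0

    def get_count(self):
        return self.count

    def set_count(self, value):
        self.count = value

c = InquisitiveCounter()
c.set_count(c.get_count() + 1)   # the caller is in charge

class AssertiveCounter:
    """Tell, don't ask: the counter is in charge of its own state."""
    def __init__(self):
        self._count = 0

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

a = AssertiveCounter()
a.increment()                    # the object is in charge
```

Both end up at the same count, but only the second respects the object’s autonomy.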


That’s really what assertiveness is: Allowing objects to be objects. Who could argue with that?

Published on June 21, 2017 15:39

June 14, 2017

Encapsulation and Testability

Poorly encapsulated software is hard to test because without clear boundaries, what we actually test becomes significantly larger than what we need to test. This makes our tests run slower and it also makes them brittle so our tests are harder to maintain.


When code is unencapsulated, it can be hard to lock down behaviors in a system that we want to test. This can get so bad that we may not even be able to use a programmatic interface and must resort instead to simulating the user interacting with the system.


Manual testing or even automating user input to drive testing is a bad idea. It’s far too high a level and it ties your tests to your user interface, making them brittle. It’s far better to provide a programmatic interface that can be used to test code.


Unit testing is the first type of testing we should think of because it’s the simplest and also the most cost-effective.


Find problems early, or better yet, set up the system so we just can’t make mistakes. Encapsulation is like that. Encapsulation is a promise that a boundary is created and that nothing will penetrate that boundary. We can define an object that has public parts and private parts. The public parts can be accessed by anything or anyone but the private parts are internal, nothing on the outside can access the private information or behavior inside.


This guarantee from our programming languages allows us to create software that is both reliable and secure. By convention, the instance data an object holds should be marked private so that no other object can access it directly. If outside objects do need access to that data, then we can provide public getters and setters.
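
Python enforces privacy only by convention, but the shape of this can be sketched with a hypothetical Account whose balance is readable from outside yet changeable only through the object’s own behavior:

```python
class Account:
    """Hypothetical example: the balance is a private part by convention;
    the public surface decides what outsiders may do with it."""

    def __init__(self, opening_balance):
        self._balance = opening_balance   # private part (by convention)

    @property
    def balance(self):
        # Public getter with no setter: outsiders can read but not write.
        return self._balance

    def deposit(self, amount):
        # State changes go through behavior the object itself controls.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

acct = Account(10)
acct.deposit(5)
```

Offering a read-only property rather than a setter is one way to grant access without giving up control of the state.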


We may, for example, want to serialize access to a particular resource so we grant access to only one request at a time, or we may want to keep track of the requestors, or keep an account of them, or whatever. The object that holds the state gets to decide, and that’s the point of object-oriented programming. We want objects to encapsulate their own state and be in charge of it, that is, to contain the behaviors that access that state, which is the next code quality that we’ll be talking about: assertiveness.


Testable code tends to be well-encapsulated. It hides implementation details and can validate that behavior is correct. Testable code is code that can be tested at the unit level. When code is built with tests in this way there’s less need for other kinds of tests. A lot of the QA testing, scenario testing, and other types of non-automated testing can go away. We’re then left with a suite of tests that have all the characteristics we need: they run fast, they give the right level of feedback, and they support refactoring—all good qualities in a test base.


Unit tests run fast because we’re only testing what we need to. If the tests were written well and written to be unique, unit tests also provide the right level of feedback.


And finally, when they’re written to test behaviors rather than implementations, unit tests support refactoring. If we test behaviors and then refactor the code so we’re changing the design but not changing the behaviors our tests shouldn’t break.

Published on June 14, 2017 10:53

June 7, 2017

Pathologies of Unencapsulated Code

Unencapsulated code is code that exposes its implementation or references how it does something. This can happen in subtle ways, so we have to be careful to create strong contracts through our method signatures.


Poorly encapsulated code is very difficult to work with. It can quickly sprout new bugs at the slightest change. Lack of encapsulation is one of the most difficult code qualities to deal with because it widens the field of what could possibly go wrong.


People call unencapsulated code “spaghetti code,” but I find it far from appetizing. Code that isn’t well-encapsulated isn’t well-defined, and that’s because the programmer who wrote it didn’t take the time to think about creating a service that could be extensible in the future.


We see the tendrils of an unencapsulated system reach everywhere. This creates systems that are highly coupled and very difficult to test at the unit level. Perhaps the worst form of unencapsulated code is a public static global variable. Such a variable can be changed at any time by anyone, which makes it unreliable, and we can’t write meaningful automated tests around it. All sorts of subtle interactions can happen between unencapsulated code and the rest of the system, so both testing and reproducing bugs that happen in the field can be difficult.


Sometimes people think that just because an implementation is exposed doesn’t mean others will abuse that information. In my experience, this is not true. When you expose implementation, people will try to take advantage of it, to squeeze out an optimization or whatever. Generally, exposing implementation details is a bad idea.


A lack of encapsulation is the number one cause of the “ripple effect” in software where one change causes several problems in seemingly unrelated pieces of code. This kind of phantom coupling exists because we make subtle assumptions about how we build software that sometimes turn out not to be true.


Think about creating services that hide their implementation details as much as possible, that accept what clients want to give and return what clients want to receive. Think about creating strong contracts for services that remove ambiguity and give informative error messages. In addition to making your data private, have good reasons before offering public getters and setters, and keep data scoped to the behavior that needs it.


Languages like Java and C Sharp let you control visibility with access modifiers. Java offers public, private, and protected, plus package-private access, which applies when no modifier is stated. Scope all data declarations and behaviors as tightly as possible. If only the class needs an item, make it private. If it is to be shared among subclasses, make it protected. If it is needed by other classes in the same package, leave off the access modifier and the language will assume package-private access. Only if these access levels are too restrictive should you make an item public.


When code is unencapsulated, business rules are spread out and when the business rules change, code has to be modified in several places, which drives up the cost of change.


I like my objects to be modest. I get embarrassed for them when they expose too much of their implementation. Only expose what you need through your method interfaces and hide everything else.


Unencapsulated code can be a real mess and really hard to work with. Deal with it by extracting classes and methods through refactoring. This can be a lot of work but it’s worth it.

Published on June 07, 2017 08:15

May 31, 2017

Quality Code is Encapsulated

To encapsulate something is to envelop and protect it. Encapsulation is a fundamental facility that every programming language supports at some level.


Encapsulation is about hiding implementation details so that ideas can be expressed at higher levels and those details are protected from other parts of the system.


To me, most fundamentally, encapsulation means hiding implementation details. Instead of expressing how to do something, we express what to do. I like to say that encapsulation is hiding the “how” with the “what.” Encapsulation is an important concept and an important code quality because it gives us the ability to change the implementation details without affecting other parts of the system.


We’re packaging bits of functionality that include both data and behavior, and that functionality is supported through the interface it represents. We should make no assumptions about any other parts of the system. The more implementation details that we can hide, the freer we are in the future to change our implementation without affecting others.


Sometimes we unwittingly leak implementation details, and sometimes we make assumptions about a system that the author may or may not have intended.


In my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, I talk about well-encapsulated software being designed from the outside-in perspective rather than the inside-out perspective.


Outside-in programming is when we design a feature or service from the consumer’s perspective. We think about what the consumer wants to give us and what the consumer expects back. We name things, in terms of the services we provide, from the consumer’s perspective. This helps us build cleaner APIs and stronger contracts. It’s also the fundamental perspective we take when doing test-first development.


Outside-in programming is all about defining boundaries. It’s about focusing on what we want to accomplish rather than how we plan on accomplishing it and it helps us build more autonomous systems that are easier to test and extend in the future.


When I talk about encapsulation I don’t just mean hiding state or behavior. There are many things that can be hidden. Even the way we think about and refer to things can hide or reveal implementation details. When we make assumptions about those details and then those details change, we have to also update our code that depends on those assumptions. Any parts of the system that depend on implementation details will have to be changed when those details—business rules, the criteria that drive decisions in our business processes—change. When our business processes need to change, we want to be able to respond accordingly.


In object-oriented programming, we want to hide as much of the system from itself as possible. Like the saying goes: “encapsulate by policy, reveal by need.”


In other words, hide as much as we can and only reveal what’s needed to others in order to accomplish a task.


With regard to objects, I am always on a need-to-know basis. Design patterns are all about hiding different things. For example, there is a group of patterns about hiding varying behaviors, another group about hiding the order or number of behaviors, and so on. In fact, I believe that one quality of great software developers is their ability to hide as much as possible in their code while still accomplishing the task.


Being able to encapsulate the right things comes from a deep understanding of the problem domain. To encapsulate code, start by creating strong contracts that are enforced through your APIs. Keep state private and hide implementation details as much as you can.

Published on May 31, 2017 07:07

May 24, 2017

Coupling and Testability

I can learn a lot about the kind of coupling in a system by listening to how people talk about testing it. If someone says, “I couldn’t test that without instantiating half the system,” then I know they have some coupling issues. Unfortunately, most systems I have seen—and most systems I know other people have seen—have an enormous amount of poor coupling. That, more than anything else, makes it really difficult to test code.


Code that directly calls a service cannot be decoupled from that service. That may not be a problem when you think about normal usage, but I want our software to be able to run not just in normal usage but also in a test mode that’s reflective of normal usage but more reliable and more robust.


In order to implement automated testing in a system, our tests must be completely reliable. That means code that depends on external code has to be built in such a way that, when we’re testing it, those external dependencies don’t need to be present. Fortunately, there’s a whole discipline and set of techniques around this that we call “mocking”: working with test doubles, or faking implementations.


Bad coupling is one of the places we feel the pain of testability the most. When a class has multiple dependencies a developer who wants to test it has to write a test harness and mock out all the dependencies. This can be very difficult and time-consuming. But there are techniques for writing code to interface with dependencies that make it easy for us to inject test doubles as needed.


One technique for making code that’s coupled to external dependencies easier to work with and to test is to separate out locating a resource from using it.


For example, if I write an API called ParseDocument(URL Webpage) that takes a webpage’s URL as input, then in order to test it I need to bring up a web server to supply that webpage, which is a lot of work just to test the parsing of the document.


So instead I could break this task into two steps: first locating the document on the web and then passing the document in to be parsed.


By separating this task out into two steps, I gain the flexibility of testing just the parsing of the document, which is the code that has my business rules. I can separate this code from the code that locates a document on a web server and returns it, because it's fair to assume that code was tested by the provider of the operating system or language and it works just fine. I certainly don't feel the need to test it myself. But if I write an API that requires me to pass in a URL in order to parse the document, I've now required myself to test a bunch of stuff I really don't need or want to test. In fact, the whole idea of injecting dependencies as needed instead of just newing them up and using them has tremendous potential for decoupling systems.
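The two-step split described above can be sketched like this. The function names and the trivial "business rule" (counting words per line) are made up for illustration; the structural point is that fetch_document is the only piece touching the network, so parse_document can be unit-tested with a plain string.

```python
from urllib.request import urlopen

def fetch_document(url):
    """Locates the resource. Exercised by integration tests, not unit tests."""
    with urlopen(url) as response:
        return response.read().decode("utf-8")

def parse_document(text):
    """Pure business logic: here, count the words on each line."""
    return [len(line.split()) for line in text.splitlines()]

# The unit test needs no web server at all:
assert parse_document("hello world\none") == [2, 1]
```

A thin top-level function can still offer the convenient one-call form, composed as parse_document(fetch_document(url)), while the tests target the parsing alone.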


I’m always surprised when even the most brilliant developers I know don’t really pay attention to these things, when they’re among the most fruitful areas for allowing us to build maintainable systems.

Published on May 24, 2017 08:35

May 17, 2017

Pathologies of Tight Coupling

We continue our conversation on code qualities with a discussion of why tight coupling is undesirable. Again, I want to stress, as I did in the last post, that we're not striving for no coupling in a system. A system with no coupling is a system that does nothing. We want the right kind of coupling in the right places: coupling that gives us the flexibility we need while still letting us apply our business rules where they belong.


Perhaps the worst kind of bad coupling in a system is global variables. Don’t get me started. I hate global variables, and I see a lot of them everywhere. They show up in all sorts of guises so most of you wouldn’t even recognize them as global variables. Do you use read-write singletons? Or service locators? Or value objects? These are all other names for global variables.


When I ask developers in my classes what they hate about global variables, they usually tell me they hate that anyone can change them at any time. But to me, this is only a minor problem. To me, the big problem with global variables is that they hide dependencies. I can't just look at the method signature and know what the method depends on. Somewhere inside that method it's going to make a call to a global resource, and I will only know that if I read the method. So using global variables in a system requires anyone who wants to understand that system to read all of the code. But if I'm explicit about all of my dependencies, then fellow developers can simply read my method signatures and see what my methods depend on.
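The contrast is easy to see in a toy example (the names and the tax-rate rule are invented for illustration). The first function secretly reads a global; the second declares the same dependency in its signature, where every reader can see it.

```python
TAX_RATE = 0.08  # module-level global

def total_with_hidden_dependency(price):
    # Nothing in the signature reveals that this reads TAX_RATE.
    return price * (1 + TAX_RATE)

def total_with_explicit_dependency(price, tax_rate):
    # The dependency is visible to anyone who reads the signature.
    return price * (1 + tax_rate)

assert total_with_explicit_dependency(100, 0.08) == total_with_hidden_dependency(100)
```

Both compute the same result, but only the second can be understood, and tested with a different rate, without reading its body.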


Of course we can’t and don’t want to eliminate global resources. It’s not that global state is bad. It’s just that it has to be managed and it also has to be understood. We should only use it when absolutely necessary and when it’s being used to represent some kind of a global resource.


Some things make sense as global resources such as a global logging service as a write-only resource, a global registry as a read-only resource, and sometimes we even need to have resources that are read-write, such as a global registry of services. But if they’re global and they’re being shared among many clients then we have to design for that. There are many schemes and design patterns that help us with this and we have to find the right design for the particular usage we’re working with.
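One way to sketch a deliberately designed global resource is a write-only logging service, as mentioned above. This is only an illustrative sketch (the Logger class and its restricted interface are my own invention): ordinary clients can append but cannot read or reset the shared state, which keeps the global manageable.

```python
class Logger:
    """A globally shared, write-only logging service (illustrative)."""
    def __init__(self):
        self._lines = []

    def write(self, message):
        self._lines.append(message)

    def dump(self):
        # Read access is reserved for the owner, e.g. flushing to disk;
        # ordinary clients only ever call write().
        return tuple(self._lines)

LOGGER = Logger()  # the one shared instance

LOGGER.write("service started")
assert LOGGER.dump() == ("service started",)
```

The discipline is in the interface: because clients can only append, no client can corrupt what another has written, which is the kind of designed-for sharing the paragraph above calls for.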


The other form of tight coupling that I see in programs all the time is what we call in the biz "magic numbers." Magic numbers are literal numeric values embedded directly in code. Numbers in and of themselves don't hold any context. The number 5 doesn't mean anything other than signifying a quantity. So it's far better to replace numbers with constants that represent the meaning of the number in the context for which it is being used. For example, MAX_USERS instead of the number 17. This makes the code far more readable and easier to work with.


These are basic, rudimentary services that a compiler offers, and we should be taking advantage of them. We want our source code to be understandable when read, so I'll either define constants within a class or define a class within a namespace that just holds constants. This is a simple technique that makes code far more readable.
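A minimal sketch of both styles mentioned above, using the MAX_USERS example from the text (the Limits class and can_add_user function are illustrative names):

```python
# Style 1: constants defined at module scope.
MAX_USERS = 17

# Style 2: a class that exists only to hold related constants.
class Limits:
    MAX_USERS = 17
    MAX_RETRIES = 3

def can_add_user(current_count):
    # Reads as the rule it encodes, not as an unexplained 17.
    return current_count < MAX_USERS

assert can_add_user(16)
assert not can_add_user(17)
```

Either way, when the limit changes there is exactly one place to edit, and every use site already explains itself.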


Another source of bad coupling that I see in a system comes from writing overly generalized method signatures. We try to address the needs of several different clients by creating an uber method that supports several different needs by passing in several different kinds of data. This is generally a bad practice. Instead, each type of client should have its own API with its own specialized interface that it can call as needed. Multiple client types should have their own entry points and their own specialized set of parameters. This helps segregate clients and limits accidental coupling. These front-end APIs can then share code internally on the back-end, as needed.
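A sketch of that shape, with hypothetical names: two narrow, client-specific entry points instead of one uber method taking a grab-bag of parameters, both delegating to a shared back-end.

```python
def _find_orders(filters):
    """Shared back-end. A real version would query a data store."""
    orders = [
        {"id": 1, "customer": "a", "status": "open"},
        {"id": 2, "customer": "b", "status": "closed"},
    ]
    return [o for o in orders
            if all(o[key] == value for key, value in filters.items())]

def orders_for_customer(customer_id):
    """Entry point for the customer-facing client."""
    return _find_orders({"customer": customer_id})

def open_orders():
    """Entry point for the back-office client."""
    return _find_orders({"status": "open"})

assert [o["id"] for o in orders_for_customer("a")] == [1]
assert [o["id"] for o in open_orders()] == [1]
```

Each client sees only the parameters it actually needs, so changing one client's interface can't accidentally break the other.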


But perhaps the worst form of coupling to me is what I call split functionality. This happens when, for whatever reason, a single process is broken out into two or more parts and those different parts then have to be kept in sync with each other. If the parts fall out of sync, we get a bug. We can fix this type of poor coupling by bringing the parts together into one central place.
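A toy illustration of the fix (the discount rule and function names are invented): in the split version, two callers each hard-code the same rule and must be kept in sync by hand; in the centralized version, one function owns the rule and both callers share it.

```python
# Split functionality: the same rule lives in two places.
def invoice_discount_split(amount):
    return 0.1 if amount > 100 else 0.0

def report_discount_split(amount):
    return 0.1 if amount > 100 else 0.0  # must mirror the line above forever

# Centralized: one function owns the rule; both callers delegate to it.
def discount(amount):
    return 0.1 if amount > 100 else 0.0

def invoice_discount(amount):
    return discount(amount)

def report_discount(amount):
    return discount(amount)

assert invoice_discount(150) == report_discount(150) == 0.1
```

In the centralized version a change to the rule happens once and both clients pick it up automatically; in the split version, forgetting one copy is exactly the out-of-sync bug described above.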


So, while too much coupling is bad, too little coupling can also be a problem.

Published on May 17, 2017 09:12