Tuesday, March 5, 2024

An Open and Shut Case for the Open-Closed Principle

The "O" in the SOLID principles of object-oriented design stands for the Open-Closed Principle (OCP). This principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. In other words, we should be able to add new features or behaviors to a software entity without changing its existing code.

Why is this principle important? Because it helps us achieve two main goals: maintainability and reusability. By following the OCP, we can avoid breaking existing functionality when we add new features, and we can also reuse existing code without having to modify it for different scenarios.

How can we apply the OCP in practice? One common way is to use abstraction and polymorphism. Abstraction means hiding the details of how something works and exposing only what it does. Polymorphism means having different implementations of the same abstraction that can be used interchangeably. For example, we can define an abstract class or interface that represents a shape, and then have different subclasses or implementations that represent specific shapes, such as circles, squares, triangles, etc. We can then write code that works with any shape without knowing its specific type or details.
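
To make that concrete, here is a minimal sketch of the shape idea; the Shape, Circle, Square, and AreaCalculator names are purely illustrative:

java
import java.util.List;

public interface Shape {
    double area();
}

public class Circle implements Shape {
    private final double radius;

    public Circle(double radius) { this.radius = radius; }

    @Override
    public double area() { return Math.PI * radius * radius; }
}

public class Square implements Shape {
    private final double side;

    public Square(double side) { this.side = side; }

    @Override
    public double area() { return side * side; }
}

public class AreaCalculator {
    // Works with any Shape without knowing its concrete type.
    public double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}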

Another way to apply the OCP is to use dependency inversion. This means depending on abstractions rather than concrete implementations. For example, instead of having a class that directly uses a database or a file system to store data, we can have a class that depends on an abstract data source that can be implemented by different concrete data sources. This way, we can change the data source without changing the class that uses it.
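
A minimal sketch of that idea might look like this; DataStore, FileDataStore, and ReportService are hypothetical names, not from any particular library:

java
public interface DataStore {
    void save(String key, String value);
    String load(String key);
}

public class FileDataStore implements DataStore {
    public void save(String key, String value) { /* file I/O lives here */ }
    public String load(String key) { return ""; /* file I/O lives here */ }
}

public class ReportService {
    private final DataStore store;

    // The service depends only on the abstraction; a database-backed
    // DataStore can be swapped in later without touching this class.
    public ReportService(DataStore store) { this.store = store; }

    public void archive(String reportId, String content) {
        store.save(reportId, content);
    }
}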

To illustrate the OCP in action, let's consider a simple example of a calculator application. Suppose we have a class called Calculator that performs basic arithmetic operations, such as addition, subtraction, multiplication, and division. The class has a method called calculate that takes two numbers and an operator as parameters and returns the result of the operation. The method looks something like this:

java
public class Calculator {

    public double calculate(double num1, double num2, char operator) {
        switch (operator) {
            case '+':
                return num1 + num2;
            case '-':
                return num1 - num2;
            case '*':
                return num1 * num2;
            case '/':
                return num1 / num2;
            default:
                throw new IllegalArgumentException("Invalid operator");
        }
    }
}

This class works fine for now, but what if we want to add more features to our calculator, such as trigonometric functions, logarithms, exponentiation, etc.? One option is to modify the calculate method and add more cases to the switch statement. However, this would violate the OCP because we would be changing the existing code of the Calculator class every time we want to extend its functionality. This would make the code more complex, error-prone, and difficult to test.

A better option is to follow the OCP and design the Calculator class so that it is open for extension but closed for modification. We can do this with abstraction and polymorphism. Instead of a single calculate method that handles every possible operation, we define an interface (or abstract class) called Operation that represents any mathematical operation, with a single method, execute, that takes two numbers and returns the result. We then create implementations of Operation for specific operations, such as Addition, Subtraction, Multiplication, Division, Exponentiation, and so on (one-argument operations such as Sine would get a similar one-parameter abstraction). Each implementation overrides execute with its own logic for performing the operation. Finally, the Calculator's calculate method takes two numbers and an Operation object and simply delegates to that object's execute method. The code would look something like this:


java
public interface Operation {
    double execute(double num1, double num2);
}

public class Add implements Operation {
    @Override
    public double execute(double num1, double num2) {
        return num1 + num2;
    }
}

public class Calculator {
    public double calculate(double num1, double num2, Operation operation) {
        return operation.execute(num1, num2);
    }
}

By adopting this approach, we adhere to the Open-Closed Principle, allowing us to introduce new operations without modifying the existing code of the Calculator class. Each new operation is encapsulated in its own class, promoting maintainability, readability, and ease of extension. This design also facilitates testing, as each operation can be tested independently, contributing to a more robust and flexible system.
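
For example, supporting exponentiation later means writing one new class and nothing else; the Calculator itself never changes. Here is a minimal sketch (the Power class, the demo class, and the sample numbers are just illustrative):

java
public class Power implements Operation {
    @Override
    public double execute(double num1, double num2) {
        return Math.pow(num1, num2);
    }
}

public class CalculatorDemo {
    public static void main(String[] args) {
        // The Calculator never needs to know which concrete operations exist.
        Calculator calculator = new Calculator();
        System.out.println(calculator.calculate(2, 3, new Add()));   // 5.0
        System.out.println(calculator.calculate(2, 3, new Power())); // 8.0
    }
}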

Peace,

JD

Monday, March 4, 2024

Decoding Dependency Inversion: Elevating Your Code to New Heights

Greetings coding enthusiasts,


Today's discussion delves into the profound concept of Dependency Inversion, unraveling its mysteries and exploring how it can elevate the sophistication of your codebase.


Unveiling the Core of Dependency Inversion

At its essence, Dependency Inversion is a concept championed by Robert C. Martin, advocating the inversion of traditional dependency flows within a system. This principle encourages a paradigm shift where high-level modules are independent of low-level implementations, fostering a codebase that is more flexible, adaptable, and maintainable.


The Dependency Inversion Principle (DIP)

Situated as one of the five SOLID principles of object-oriented design, the Dependency Inversion Principle asserts two fundamental guidelines:


High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend on details. Details should depend on abstractions.

In simpler terms, Dependency Inversion calls for a departure from the traditional structure where high-level components dictate the behavior of low-level components. Instead, both levels should depend on abstractions, paving the way for seamless modifications and extensions.


Breaking the Shackles of Dependency Chains

In a conventionally tightly coupled system, the rigid hierarchy of high-level modules dictating the behavior of low-level modules impedes the adaptability of the codebase. Dependency Inversion liberates the code by introducing a layer of abstraction. This abstraction allows both high-level and low-level components to depend on common interfaces, eliminating the need for direct dependencies.


The Pivotal Role of Interfaces

Interfaces emerge as key players in the application of Dependency Inversion. By defining interfaces that encapsulate behaviors, high-level modules can interact with low-level modules through these abstractions. This indirection facilitates communication through well-defined contracts, reducing dependencies on specific implementations.


Practical Implementation of Dependency Inversion

Consider the common scenario of database access in a web application. Rather than high-level business logic tightly coupling with a specific database implementation, Dependency Inversion advocates creating an interface (e.g., DatabaseRepository). Concrete implementations (such as MySQLRepository or MongoDBRepository) adhere to this interface, allowing the high-level logic to interact with any database without direct dependencies.
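
A rough sketch of that arrangement might look like the following; the interface, class, and record names here are invented for illustration rather than taken from a specific framework:

java
import java.util.Optional;

// 'Customer' stands in for whatever domain type the business logic already uses.
public record Customer(String id, String name) {}

public interface DatabaseRepository {
    void save(Customer customer);
    Optional<Customer> findById(String id);
}

public class MySQLRepository implements DatabaseRepository {
    public void save(Customer customer) { /* JDBC and SQL details live here */ }
    public Optional<Customer> findById(String id) { return Optional.empty(); }
}

public class MongoDBRepository implements DatabaseRepository {
    public void save(Customer customer) { /* MongoDB driver details live here */ }
    public Optional<Customer> findById(String id) { return Optional.empty(); }
}

// The high-level logic depends only on the interface and never names a database.
public class CustomerRegistration {
    private final DatabaseRepository repository;

    public CustomerRegistration(DatabaseRepository repository) {
        this.repository = repository;
    }

    public void register(Customer customer) {
        repository.save(customer);
    }
}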


Unlocking the Benefits of Dependency Inversion

Flexibility and Adaptability: Dependency Inversion decouples components, providing the freedom to switch out implementations without disrupting the overall system.


Testability: Abstracting dependencies through interfaces simplifies testing, allowing for the use of mock implementations and ensuring the isolation of units during testing.


Reduced Coupling: Dependency Inversion reduces the coupling between different parts of your code, fostering a modular and maintainable codebase.


Parting Thoughts

In embracing Dependency Inversion, we empower our code to gracefully adapt to change and evolve over time. Adherence to the Dependency Inversion Principle molds architectures that are resilient, testable, and inherently scalable.


As you navigate the intricate landscapes of software design, consider the transformative influence of Dependency Inversion. May your abstractions be robust, and your dependencies, elegantly inverted.


Happy coding!

JD

Unraveling Hexagonal Architecture: A Blueprint for Code Harmony

Hello fellow developers,

Today, let's delve into the captivating realm of Hexagonal Architecture. As software craftsmen, we're constantly seeking design paradigms that not only make our code maintainable but also allow it to evolve gracefully with changing requirements. Enter Hexagonal Architecture, also known as Ports and Adapters.

Understanding the Hexagon

At its core, Hexagonal Architecture revolves around the concept of a hexagon – a shape with six equal sides, each representing a facet of your application. This model focuses on decoupling the core business logic from external concerns, resulting in a more flexible and testable system.

The Hexagon's Heart: The Core

Picture the hexagon's center as the heart of your application – the core business logic. This is where the magic happens, where your unique value proposition resides. It's independent of external details such as databases, frameworks, or UI elements. Here, your business rules reign supreme.

Ports and Adapters: The Boundary

The sides of the hexagon represent the application's boundaries. We have "ports" for interacting with the outside world and "adapters" for implementing those ports. Ports define interfaces through which the core communicates, while adapters provide concrete implementations, bridging the gap between the core and external components.

Adapting to the Real World

Consider an example where your application needs to persist data. Instead of embedding database code directly into the core, you create a port, say PersistencePort, defining methods like save and retrieve. Adapters, then, implement this port – one adapter for a relational database, another for a document store, and so on.
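
A bare-bones sketch of that split, with Recipe standing in for one of your core domain types and the adapter name invented for illustration:

java
import java.util.Optional;

// 'Recipe' stands in for a core domain type.
public record Recipe(String id, String name) {}

// The port: an interface defined and owned by the core.
public interface PersistencePort {
    void save(Recipe recipe);
    Optional<Recipe> retrieve(String id);
}

// An adapter at the edge of the hexagon, implementing the port for one
// particular technology; a document-store adapter would sit right beside it.
public class RelationalPersistenceAdapter implements PersistencePort {
    public void save(Recipe recipe) { /* SQL mapping lives here */ }
    public Optional<Recipe> retrieve(String id) { return Optional.empty(); }
}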

Embracing Dependency Inversion

Hexagonal Architecture thrives on the Dependency Inversion Principle. Rather than depending on concrete implementations, the core defines interfaces (ports) that external components (adapters) implement. This inversion of control empowers the core, reducing its reliance on volatile external details.

Hexagonal Harmony in Action

Let's visualize this with a scenario. Imagine your application is a bakery management system. The core handles crucial bakery operations, like creating recipes and managing inventory. On one side, you have a UI adapter allowing interaction through a sleek web interface. On another side, a persistence adapter ensures your recipes endure, be it in a relational database or a cloud-based storage solution.

Benefits Beyond Symmetry

The advantages of Hexagonal Architecture extend far beyond its elegant symmetry. Your code becomes more modular, promoting easier maintenance and testing. The core remains blissfully unaware of the external forces acting upon it, fostering adaptability to changes without jeopardizing the essence of your application.

Parting Thoughts

In the grand tapestry of software design, Hexagonal Architecture stands as a testament to elegance and adaptability. Embrace the hexagon, where your core business logic resides, surrounded by ports and adapters that dance in harmony, ensuring your application's longevity in an ever-evolving digital landscape.

Until next time, happy coding!

Cheers, JD

Friday, February 18, 2022

Ports and Adapters, Part 2


Approach, not reproach

Many programming tutorials are difficult to follow.  I do not necessarily expect this one to be any different, but I'll try.  I'm going to start with relatively broad concepts, any of which may lead to a new post drilling down into that topic in further detail one day.

Take a client-centric view of the solution

We have nobody to blame but ourselves.  We've all done it.  We hit Google, or StackOverflow, and grab a chunk of code that interfaces with some library we want to use.  Then we change the shape of our code, introducing abstractions that don't really gel with the ones around them, in order to make that adopted code work.
Don't do that.

Design the API you want to use

Think about things from your own application's point of view.  Do you need a database client?  Or do you need a way to get data based on a few 'key' criteria?  What level of abstraction do you really want to be working with as you write your code?
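
For instance, instead of asking for a database client, you might sketch the interface your application actually wants; every name here is hypothetical, and Order stands in for a domain type you already have:

java
import java.util.List;
import java.util.Optional;

// The API the application wants to talk to: data by a few 'key' criteria,
// with no mention of SQL, drivers, or connection pools.
public interface OrderLookup {
    List<Order> findByCustomer(String customerId);
    Optional<Order> findByOrderNumber(String orderNumber);
}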

Test first

I prefer a TDD (Test Driven Development) approach to writing code.  When one first encounters the notion, it seems backwards.  It was certainly very different from practices I'd spent years on.  But in the end, following this discipline beats the hell out of attempting to add unit tests to the tangled messes many of us naturally weave otherwise.  I've been called good (or sometimes better than that) at what I do, but when I look around at my code, what I see is still more of a mess than I'd like.
TDD really helps with that.  I think that topic deserves a post of its own.
Use your concept to build some code.  Mock the responses you intend your own API to provide.  Call your own API.  It's all a kind of unit/integration testing hybrid, and it's all verifiable.  It also works the way you think as a developer, long before you've actually decided which database or messaging system you'll be using.
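
In Java, that stage might look roughly like this, assuming JUnit 5 and Mockito are on the classpath; CustomerOrders and OrderCounter are hypothetical stand-ins for your own API and the code that consumes it:

java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class CustomerOrdersTest {

    // The API we wish we had, designed from the application's point of view.
    interface CustomerOrders {
        List<String> orderNumbersFor(String customerId);
    }

    // A hypothetical bit of application code that consumes that API.
    static final class OrderCounter {
        private final CustomerOrders orders;
        OrderCounter(CustomerOrders orders) { this.orders = orders; }
        int countFor(String customerId) { return orders.orderNumbersFor(customerId).size(); }
    }

    @Test
    void countsOrdersForACustomer() {
        // Mock the responses we intend our own API to provide; no real database yet.
        CustomerOrders orders = mock(CustomerOrders.class);
        when(orders.orderNumbersFor("cust-42")).thenReturn(List.of("A-1", "A-2"));

        assertEquals(2, new OrderCounter(orders).countFor("cust-42"));
    }
}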

Adapt your concept to a real-world library

When you've achieved that level, it's finally time to replace the mocked service provider with a real one... And testing that can appear very difficult.  I'll discuss techniques for that in future posts.

But first, a word about using code that demonstrates use of a library:

There's nothing wrong with learning how to use a library that way.  It should teach you the general usage patterns, pre-requisites, etc.  But that doesn't mean you should copy that demo function into your production application and start to use it!

At a bare minimum paste it into a new source file.  Better yet, use a test-driven approach to create a new file that implements the subset of capabilities you need.
Enjoy!

That's really all you have to do to get dependencies out of your own code.  It sounds easy, and sometimes it is.  Actually, a lot of the time it is, once you figure out how to think about it.

Monday, January 31, 2022

Ports and Adapters part 1


A love story

Today I want to begin a series about a technique which helps isolate code from changes in implementation details.  Sometimes those details may seem central to the code in question, but many times they just aren't.

I'm going to start with a little story.  I don't know if it's interesting or not, but it's all true.

Many years ago, back when my razor stubble was brown and I was a hired gun programmer, I was responsible for maintaining a corporate security library.  This was an important library.  Dozens of applications depended on it to authenticate and authorize corporate users.

There was just one problem:  it relied on an obsolete LDAP library which was no longer being maintained, and that had to change.

This was a pretty frightening task.  It absolutely had to work.  This was before the idea of providing such a thing as a service had become popular, and the library was statically linked to all those applications.  A mistake meant a patch, a patch that would get attention, and what's more a patch that would have to be quickly rolled out to all affected applications.

I hate attention.  Well, bad attention anyway.  I suppose I'm also not really a big fan of emergency deployments. 

I had to formulate a plan of attack.  As per my usual practice, I banged my head against things until I eventually worked out what I could have found in a book.  Let me walk you through my process:

Step 1:  Tests

First, I built a comprehensive suite of unit tests that validated all of the library's functions.  The tests were a little more integration-ey than I would build today, but when things were green?  I had great confidence that I would have a successful build.  We were also adopting Jenkins at the time, so every code commit was tested and built automatically.

Step 2: Isolation

Next up, I isolated all of the library calls behind interfaces.  That was a lot of effort, but with the tests backing me up, I was able to keep everything working just as before.  I did have to add a factory to instantiate the main library class, but kept that hidden behind a facade that looked unchanged to the API users. 

Step 3: In with the new

Now I started on the new code.  By sticking to the interfaces which allowed me to accomplish step 2, I was able to swap back and forth between the fully functional production implementation and the one under development.  All I had to do was change the name of the class from 'new obsoleteImplementation()' to 'new unfinishedImplementation()'.  I could run the test suite and get a pretty solid idea of how far I had come and how far I had to go.

Step 4: Risky business

I realized that this was dangerous territory.  If I shipped a library that had an unexpected bug under load or under some kind of unexpected error condition, there could be really big implications.  The user base included both internal and external entities, offering lots of exposure.  That was too big a risk, so I had to do something to mitigate it.

Step 5: On reflection, this is a good idea

I was working in Java, so I decided to take advantage of reflection to create the class.  If you aren't familiar with the concept, it is just a way of creating an object using the name of the class as a string value.  That came in handy, because now I could just have the two different class names for the now-isolated library and use either one live at run-time.  To make it as safe as possible, I initially defaulted to the old library, but gave the clients an option to set an environment variable to enable the new one.
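
A stripped-down sketch of that mechanism; the interface, the class names, and the environment variable are all invented here for illustration:

java
public interface SecurityProvider {
    boolean authenticate(String user, String password);
}

public final class SecurityProviderFactory {

    private static final String OLD_IMPL = "com.example.security.LegacyLdapProvider";
    private static final String NEW_IMPL = "com.example.security.NewLdapProvider";

    public static SecurityProvider create() throws Exception {
        // Default to the battle-tested implementation; clients opt in to the
        // new one by setting an environment variable.
        String className = "true".equalsIgnoreCase(System.getenv("USE_NEW_LDAP"))
                ? NEW_IMPL
                : OLD_IMPL;
        return (SecurityProvider) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
    }
}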

Fortunately, I had good ties with developers for several of the other projects, and I was able to cajole them into testing and then deploying with the optional library enabled.

Step 6: Once more into the breach

Once I had good feedback, I felt safe making the new library the default, while allowing an environment variable to enable the old library.  I kept that around for a good long while, until I was sure it was safe to get rid of it.

Phew

That was a lot of effort and a lot of worry as well. We can do a little better than this.  In fact, we can do a great deal better than this, although frankly I was proud of my accomplishment.  What I'd stumbled into was a more general idea around dependency isolation.  You'll hear terms like 'hexagonal architecture' used to tell you what to do.  

But how do you actually DO it?

That's what I want to dig into in the next few posts.  Swapping dependencies can be a giant pain point, but it does not have to be.  The most difficult thing, really, is adapting how you think about systems.  The trick is to stop trying to adapt our code to someone else's idea of how an API should work.  That is OK for quick demonstrations or tutorials, but it's not how I believe we should build systems.

Instead, when we require a capability, we should design an API for that capability ourselves, whether or not we intend to implement it.  The design should be harmonious with our existing system, or at least follow similar conventions.  It should not feel tacked on.

Integrating external libraries deeply into our own code base is a code smell.  I want my code talking to my own libraries, which will act as adapters to the third party code I want or need to use.  Let those adapters have the weird stuff that thinks the way other people do.  My job is to make those adapters conform to the expectations I've set for/with my API design.

Next time around, I'll dig a little deeper into what I mean by designing an API ourselves, and what the benefits (and costs) are.

Peace,

JD

Friday, January 28, 2022

Check in, check out, Daniel-san

When you start to work on projects with more than one developer, you suddenly find yourself having to solve what sounds like a very simple problem:  sharing code.  At its heart it IS simple, but the reality is that you have to take a disciplined approach.  Trying to do this without using a tool designed for the job is a likely path to madness, and it's a bit mad not to do so given that the tools you need are widely available at no cost.  Every professional shop and open source project, and even many independent individuals, makes use of some kind of version control system.

Version control at first feels like a burden.  If anything though, it is quite the opposite.  Knowing that your old code is out there, ready to be brought back into your project anytime you need it?  That's gold.  It frees you up to experiment, to explore, to go down paths that you really aren't sure lead anywhere.

So how does one get started?  Naturally, as with anything else, it begins with education and access.  In this case, you need to determine what you have available to you first.  If there's an existing system ready for you to use, you probably want to take advantage of that.  If you're a clean slate, you need to get something set up.  There are many services out there which provide version control, and it can even be free if you don't have a problem with other people possibly seeing your code.  It can also be free if you feel safe just running everything on your own computer or a server you control, although then you may need to install a service and keep it running.

Here are some of the more popular version control systems:

CVS - An old standby; it still works, but frankly it's lacking a bit in features more modern systems have.
SVN - More modern and quite functional; I have worked (and continue to work) in Subversion shops for years.
BitKeeper - This was paid software for years and I've never actually used it myself.  It's probably best known as the system Linus Torvalds adopted for Linux kernel development for a time.
Git - A slightly more convoluted system than some, but clearly very powerful.  This ALSO came about due to Linux development, after the kernel project's free use of BitKeeper came to an end.

There are others, but these are probably the main ones most of you will be looking at.

I personally use Git (hosted on a service) for code I share on this blog and for my own experimental work. It doesn't cost me anything, and it's nicely integrated with my IntelliJ IDE.   The hosting service also supports something called a 'gist', which is (as far as I know) a unique way to share a small subset of a project in order to request assistance or provide examples.

The basic idea behind version control is the same, no matter what system you use.  You make changes to software on your own computer and make sure that things work the way you want.  When you're happy with the code, you check it in to your version control repository.  If you are unhappy with the code, or have broken something to the point where fixing it is a major burden, you can just pull the last working copy back down and you're back to a known good starting point.

If multiple programmers are working on a project, things are much the same, except that you will pull the last working copy down a bit more often as you are getting all the changes that others have checked in as well.  Things are quite simple as long as two developers aren't working on the same exact files.  If they are working on the same files, some manual intervention is likely going to be needed to ensure that changes don't conflict.  That last process is called 'merging'.


Merging is a source of difficulty, or it can be, depending on your development practices.  I prefer to keep commit changes small and isolated whenever possible.  This keeps the differences (deltas) down to manageable levels, and if I've added two new source files rather than modified an existing one, we're not going to run into any problems.

Avoiding the Deadly Quadrant

I saw a video recently which illustrated an important concept in a very elegant way: picture a two-by-two grid, with mutable versus immutable data on one axis and shared versus unshared data on the other.  The deadly quadrant is shared, mutable data.

This is important, because in the other three quadrants your code is inherently safe to run in a multi-threaded environment.  There are no synchronization blocks required, there is no need for complicated gatekeeping.  And yet, somehow a lot of code winds up squarely in that deadly quadrant, with shared mutable data and synchronization headaches.

Applying just a few functional programming principles to your work can go a long way.  Parameters should generally be considered inviolate: use return values properly, and don't try modifying your inputs directly.  Prefer constants to variables. 

When you kick off a process, you really don't want it randomly reaching out and modifying some kind of global state.  If it REALLY needs to send messages home, give it a tool to do so, such as a callback function it can use for that purpose.
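
A small sketch of what that can look like, with the names invented for illustration:

java
import java.util.function.Consumer;

public class ReportJob {

    // Instead of writing to some global status object, the job is handed a
    // callback it can use to send messages home.
    public void run(Consumer<String> progressCallback) {
        progressCallback.accept("started");
        // ... do the actual work ...
        progressCallback.accept("finished");
    }
}

// The caller decides what 'sending a message home' means, for example:
// new ReportJob().run(message -> logger.info(message));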

New Directions

I've been learning lately.

I mean, I have been learning a lot.  Some of it is completely new, some of it is just fresh perspective on old ideas.

This is undoubtedly the natural consequence of being put in charge of developing a green-field project (can you believe it?!?) using both familiar and unfamiliar technologies.  It's a mobile app.  It's an API.  It's a cloud native, event driven...  work in progress.  As a consequence, I run into unexpected things all the time.  I also get to see up close what works and doesn't, as my client is flexible enough to let us experiment with features.

I think I need to record this stuff for anyone who might be interested.  I'm not saying that I will be giving up on discussions of OOP principles, but I will also be expanding my reach.

There was no single trigger for this, but the past year or so has helped me to understand a few new tools and concepts.  I'm still working out others, as I have been all along.  But now I'm going to write it down here.

I hope it proves useful.


JD

Saturday, February 15, 2020

Mary had a little lambda

Java 8 brought a lot of changes to the imperative/object oriented world of corporate software development.  So many changes that to this day the more fluent and functional style of code this enables is still not as prevalent as one might hope.

Today I'm going to discuss one aspect of these changes, lambda expressions.

What is a lambda?  It's a function without a name, and with an initially odd looking syntax.

When you see something that looks like this:
.map(t->t.getModels())

You may get a bit confused.  I know I did, and it took me a little while to comprehend this structure.

But really, it's just a method written with slightly different syntax.  When a bit of the fluff is left in, it may start to look rather familiar:

map((t)->t.getModels())

Or
map((t)->{ return t.getModels(); })

These all do the same thing.  Each is a method designed to do only one thing: return the models contained within the parameter object passed to it.  The big differences are that the method has no name, and the little '->' marks it as something to be executed later, at need, rather than where it is written.  That is what makes it a lambda, mostly.  It is a method or function, but one which can be passed around to whoever might be interested in it, and it only runs when that code actually invokes it.  

Note that you would probably never write this as a stand-alone method.  Its whole body is already a one-line call on whatever 't' represents.  

And that "whatever 't' represents" is one of the issues that can make it a little harder than it has to be to understand.  I don't love one letter variable names, so maybe it would be more straightforward to write it like this:

.map(manufacturer->manufacturer.getModels())

I haven't seen that done very often, but I'm going to go this way for my own work because it expresses the intention with perfect clarity.  Sure, in IntelliJ I get hints along the right side of the screen that inform me about types, but what about a code snippet from GitLab or something?

There is one additional thing you absolutely must understand about lambda expressions:  every local variable you hand to one has to remain unchanged; in Java terms, it must be final or effectively final.  If you try to capture a variable that gets reassigned, you're gonna have a bad time, because the compiler will refuse outright.  Only the parameter(s) can be different from call to call.  
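
A tiny illustration of the rule (the variable names are arbitrary):

import java.util.List;

List<Integer> prices = List.of(100, 250);
int taxRate = 20;                                      // effectively final
prices.stream()
      .map(price -> price + price * taxRate / 100)     // fine: taxRate never changes
      .forEach(System.out::println);

// taxRate = 25;   // uncommenting this reassignment would make the lambda
//                 // above fail to compile: captures must be effectively final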

Finally, there can be a temptation (especially given the syntax of that third example) to stick a few more lines of code inside the brackets.  Don't do that.  If it is a unit of work, it's worth naming it and making it available to others.  Extract a method and call that instead.  I almost wish that we couldn't write things that way, because programmers tend to be...  Oh, let's say expedient about getting things done.  I completely understand that, but please don't do this:

.map(manufacturer -> {
  if (manufacturer.getName().equals("blahblah")) {
    doSomething();
  } else {
    doSomethingElse();
  }
  logger.info("Hey, look, I'm screwing up lambdas for all!");
  return manufacturer.getModels();
})

Just don't.  If it deserves curly braces, it deserves a name.