Monday, September 13, 2021

City of code

To developers, the most conspicuous difference between Web-based and desktop software is that a Web-based application is not a single piece of code. It will be a collection of programs of different types rather than a single big binary. And so designing Web-based software is like designing a city rather than a building: as well as buildings you need roads, street signs, utilities, police and fire departments, and plans for both growth and various kinds of disasters.

Paul Graham, in The Other Road Ahead.

Saturday, August 28, 2021

Refactoring toward deeper insight II

Refactoring toward deeper insight is a multifaceted process. It will be helpful to stop for a moment to pull together the major points.

Initiation

Refactoring toward deeper insight can begin in many ways. It may be a response to a problem in the code — some complexity or awkwardness. Rather than apply a standard transformation of the code, the developers sense that the root of the problem is in the domain model. Perhaps a concept is missing. Maybe some relationship is wrong.

In a departure from the conventional view of refactoring, this same realization could come when the code looks tidy, if the language of the model seems disconnected from the domain experts, or if new requirements are not fitting in naturally. Refactoring might result from learning, as a developer who has gained deeper understanding sees an opportunity for a more lucid or useful model.

Seeing the trouble spot is often the hardest and most uncertain part. After that, developers can systematically seek out the elements of a new model. They can brainstorm with colleagues and domain experts. They can draw on systematized knowledge written as analysis patterns or design patterns.

Exploration teams

Whatever the source of dissatisfaction, the next step is to seek a refinement that will make the model communicate clearly and naturally. This might require only some modest change that is immediately evident and can be accomplished in a few hours. In that case, the change resembles traditional refactoring. But the search for a new model may well call for more time and the involvement of more people.

There are a few keys to keeping this process productive.

  • Self-determination. A small team can be assembled on the fly to explore a design problem. The team can operate for a few days and then disband. There is no need for long-term, elaborate organizational structures.
  • Scope and sleep. Two or three short meetings spaced out over a few days should produce a design worth trying. Dragging it out doesn't help. If you get stuck, you may be taking on too much at once. Pick a smaller aspect of the design and focus on that.
  • Exercising the UBIQUITOUS LANGUAGE. Involving the other team members — particularly the subject matter expert — in the brainstorming session creates an opportunity to exercise and refine the UBIQUITOUS LANGUAGE. The end result of the effort is a refinement of that LANGUAGE which the original developer(s) will take back and formalize in code.

Earlier chapters in this book have presented several dialogs in which developers and domain experts probe for better models. A full-blown brainstorming session is dynamic, unstructured, and incredibly productive.

Prior art

You can get ideas from books and other sources of knowledge about the domain itself. Although the people in the field may not have created a model suitable for running software, they may well have organized the concepts and found some useful abstractions. Feeding the knowledge-crunching process this way leads to richer, quicker results that also will probably seem more familiar to domain experts.

A design for developers

Software isn't just for users. It's also for developers. Developers have to integrate code with other parts of the system. In an iterative process, developers change the code again and again.

A supple design helps limit mental overload, primarily by reducing dependencies and side effects. It is based on a deep model of the domain that is fine-grained only where most critical to the users. This makes for flexibility where change is most common, and simplicity elsewhere.

Timing

If you wait until you can make a complete justification for a change, you've waited too long. Your project is already incurring heavy costs, and the postponed changes will be harder to make because the target code will have been more elaborated and more embedded in other code.

Continuous refactoring has come to be considered a "best practice," but most project teams are still too cautious about it. They see the risk of changing code and the cost of developer time to make a change; but what's harder to see is the risk of keeping an awkward design and the cost of working around that design. Developers who want to refactor are often asked to justify the decision. Although this seems reasonable, it makes an already difficult thing impossibly difficult, and tends to squelch refactoring (or drive it underground). Software development is not such a predictable process that the benefits of a change or the costs of not making a change can be accurately calculated.

Refactoring toward deeper insight needs to become part of the ongoing exploration of the subject matter of the domain, the education of the developers, and the meeting of the minds of developers and domain experts.

Therefore, refactor when

  • The design does not express the team's current understanding of the domain;
  • Important concepts are implicit in the design (and you see a way to make them explicit); or
  • You see an opportunity to make some important part of the design suppler.

This aggressive attitude does not justify any change at any time. Don't refactor the day before a release. Don't introduce "supple designs" that are just demonstrations of technical virtuosity but fail to cut to the core of the domain. Don't introduce a "deeper model" that you couldn't convince a domain expert to use, no matter how elegant it seems. Don't be absolute about things, but push beyond the comfort zone in the direction of favoring refactoring.

Crisis as opportunity

For over a century after Charles Darwin introduced it, the standard model of evolution was that species changed gradually, somewhat steadily, over time. Suddenly, in the 1970s, this model was displaced by the "punctuated equilibrium" model. In this expanded view of evolution, long periods of gradual change or stability are interrupted by relatively short bursts of rapid change. Then things settle down into a new equilibrium. Software development has an intentional direction behind it that evolution lacks (although it may not be evident on some projects), but nonetheless it follows this kind of rhythm.

Classical descriptions of refactoring sound very steady. Refactoring toward deeper insight usually isn't. A period of steady refinement of a model can suddenly bring you to an insight that shakes up everything. These breakthroughs don't happen every day, yet a large proportion of the changes that lead to a deep model and supple design emerge from them.

Such a situation often does not look like an opportunity; it seems more like a crisis. Suddenly there is some obvious inadequacy in the model. There is a gaping hole in what it can express, or some critical area where it is opaque. Maybe it makes statements that are just wrong.

This means the team has reached a new level of understanding. From their now-elevated viewpoint, the old model looks poor. From that viewpoint, they can conceive a far better one. Refactoring toward deeper insight is a continuing process. Implicit concepts are recognized and made explicit. Parts of the design are made suppler, perhaps taking on a declarative style.

Development suddenly comes to the brink of a breakthrough and plunges through to a deep model — and then steady refinement starts again.

Eric Evans, "Refactoring Toward Deeper Insight", in Domain-Driven Design: Tackling Complexity in the Heart of Software, 321-329.

DDD and patterns

Applying analysis patterns

Deep models and supple designs don't come easily. Progress comes from lots of learning about the domain, lots of talking, and lots of trial and error. Sometimes, though, we can get a leg up.

In Analysis Patterns: Reusable Object Models, Martin Fowler defined his patterns this way:

Analysis patterns are groups of concepts that represent a common construction in business modeling. It may be relevant to only one domain or it may span many domains.

The analysis patterns Fowler presents arose from experience in the field, and so they are practical, in the right situation. Such patterns provide someone facing a challenging domain with very valuable starting points for their iterative development process. The name emphasizes their conceptual nature. Analysis patterns are not technological solutions; they are guides to help you work out a model in a particular domain.

On a mature project, model choices are often informed by experience with the application. Multiple implementations of various components will have been tried. Some of these will have been carried into production and even will have faced the maintenance phase. Many problems can be avoided when such experience is available. Analysis patterns at their best can carry that kind of experience from other projects, combining model insights with extensive discussions of design directions and implementation consequences. To discuss model ideas out of that context makes them harder to apply and risks opening the deadly divide between analysis and design, which is antithetical to MODEL-DRIVEN DESIGN.

Analysis patterns are knowledge to draw on

When you are lucky enough to have an analysis pattern, it is hardly ever the answer to your particular needs. Yet it offers valuable leads in your investigation, and it provides cleanly abstracted vocabulary.

When you use a term from a well-known analysis pattern, take care to keep the basic concept it designates intact, however much the superficial form might change. There are two reasons for this. First, the pattern may embed understanding that will help you avoid problems. Second, and more important, your UBIQUITOUS LANGUAGE is enhanced when it includes terms that are widely understood or at least well explained. If your model definitions change through the natural evolution of the model, take the trouble to change the names too.

This kind of reapplication of organized knowledge is completely different from attempts to reuse code through frameworks or components, except that either could provide the seed of an idea that is not obvious. A model, even a generalized framework, is a complete working whole, while an analysis pattern is a kit of model fragments. Analysis patterns focus on the most critical and difficult decisions and illuminate alternatives and choices. They anticipate downstream consequences that are expensive if you have to discover them for yourself.

Relating design patterns to the models

The patterns explored in this book so far are intended specifically for solving problems in a domain model in the context of a MODEL-DRIVEN DESIGN. Actually, though, most of the patterns published to date are more technical in focus. What is the difference between a design pattern and a domain pattern?

Some, not all, of the patterns in Design Patterns can be used as domain patterns. Doing so requires a shift in emphasis. Design Patterns presents a catalog of design elements that have solved problems commonly encountered in a variety of contexts. The motivations of these patterns and the patterns themselves are presented in purely technical terms. But a subset of these elements can be applied in the broader context of domain modeling and design, because they correspond to general concepts that emerge in many domains.

In addition to those in Design Patterns, there have been many other technical design patterns presented over the years. Some of them correspond to deep concepts that emerge in domains. It would be nice to draw on this work. To make use of such patterns in domain-driven design, we have to look at the patterns on two levels simultaneously. On one level, they are technical design patterns in the code. On the other level, they are conceptual patterns in the model.
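To make the two levels concrete, here is a minimal sketch (all names invented for illustration) of STRATEGY read as a domain pattern: technically it is an interchangeable algorithm behind an interface; conceptually it is an explicit business policy that domain experts can discuss by name.

```java
// Hypothetical sketch: STRATEGY at two levels. Technically, an
// interchangeable algorithm behind an interface; conceptually,
// OverbookingPolicy is a business rule with a name from the domain.
interface OverbookingPolicy {
    boolean allows(int requestedTons, int bookedTons, int capacityTons);
}

// A policy a domain expert might state as "accept bookings up to 10%
// over capacity, because some bookings are always cancelled".
class TenPercentOverbooking implements OverbookingPolicy {
    @Override
    public boolean allows(int requestedTons, int bookedTons, int capacityTons) {
        return bookedTons + requestedTons <= capacityTons * 1.1;
    }
}

class Voyage {
    private final int capacityTons;
    private final OverbookingPolicy policy;
    private int bookedTons = 0;

    Voyage(int capacityTons, OverbookingPolicy policy) {
        this.capacityTons = capacityTons;
        this.policy = policy;
    }

    boolean tryBook(int tons) {
        if (!policy.allows(tons, bookedTons, capacityTons)) return false;
        bookedTons += tons;
        return true;
    }
}
```

Swapping in a different OverbookingPolicy changes a business rule, not a technical detail; that shift in emphasis is what makes the same pattern a domain pattern.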

I'm not going to try to compile a list of the design patterns that can be used as domain patterns. Although I can't think of an example of using an interpreter as a domain pattern, I'm not prepared to say that there is no conception of any domain that would fit. The only requirement is that the pattern should say something about the conceptual domain, not just be a technical solution to a technical problem.

Eric Evans, "Applying Analysis Patterns" and "Relating Design Patterns to the Model", in Domain-Driven Design: Tackling Complexity in the Heart of Software, 293-320.

Thursday, July 29, 2021

Supple design II

Declarative design

This term means many things to many people, but usually it indicates a way to write a program, or some part of a program, as a kind of executable specification. A very precise description of properties actually controls the software. In its various forms, this could be done through a reflection mechanism or at compile time through code generation (producing conventional code automatically, based on the declaration). This approach allows another developer to take the declaration at face value. It is an absolute guarantee.

Generating a running program from a declaration of model properties is a kind of Holy Grail of MODEL-DRIVEN DESIGN, but it does have its pitfalls in practice. For example, here are just two particular problems I've encountered more than once.

  • A declaration language not expressive enough to do everything needed, but a framework that makes it very difficult to extend the software beyond the automated portion
  • Code-generation techniques that cripple the iterative cycle by merging generated code into handwritten code in a way that makes regeneration very destructive

The unintended consequence of many attempts at declarative design is the dumbing-down of the model and application, as developers, trapped by the limitations of the framework, enact design triage in order to get something delivered.

Rule-based programming with an inference engine and a rule base is another promising approach to declarative design. Unfortunately, subtle issues can undermine this intention. Although a rules-based program is declarative in principle, most systems have "control predicates" that were added to allow performance tuning. This control code introduces side effects, so that the behavior is no longer dictated completely by the declared rules. Adding, removing, or reordering the rules can cause unexpected, incorrect results. Therefore, a logic programmer has to be careful to keep the effect of code obvious, just as an object programmer does.

Many declarative approaches can be corrupted if the developers bypass them intentionally or unintentionally. This is likely when the system is difficult to use or overly restrictive. Everyone has to follow the rules of the framework in order to get the benefits of a declarative program.

The greatest value I've seen delivered has been when a narrowly scoped framework automates a particularly tedious and error-prone aspect of the design, such as persistence and object-relational mapping. The best of these unburden developers of drudge work while leaving them complete freedom to design.

Domain-specific languages

An interesting approach that is sometimes declarative is the domain-specific language. In this style, client code is written in a programming language tailored to a particular model of a particular domain. For example, a language for shipping systems might include terms such as cargo and route, along with syntax for associating them. The program is then compiled, often into a conventional object-oriented language, where a library of classes provides implementations for the terms in the language.
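As a rough sketch of what that compilation target might look like (the little language's syntax and all class names here are invented), a statement in the shipping language could translate into calls against an ordinary class library:

```java
// Hypothetical sketch: a statement in a shipping language such as
//
//     route from SHANGHAI via SINGAPORE to ROTTERDAM
//
// compiled into a conventional object-oriented language, could become
// calls against a class library like this one (all names invented).
import java.util.ArrayList;
import java.util.List;

enum Port { SHANGHAI, SINGAPORE, ROTTERDAM }

final class Route {
    private final List<Port> legs;

    private Route(List<Port> legs) { this.legs = legs; }

    // Factory methods mirroring the terms of the little language.
    static RouteBuilder from(Port origin) { return new RouteBuilder(origin); }

    static final class RouteBuilder {
        private final List<Port> legs = new ArrayList<>();
        RouteBuilder(Port origin) { legs.add(origin); }
        RouteBuilder via(Port stop) { legs.add(stop); return this; }
        Route to(Port destination) { legs.add(destination); return new Route(legs); }
    }
}

// What the generated client code might look like:
// Route route = Route.from(Port.SHANGHAI).via(Port.SINGAPORE).to(Port.ROTTERDAM);
```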

In such a language, programs can be extremely expressive, and make the strongest connection with the UBIQUITOUS LANGUAGE. This is an exciting concept, but domain-specific languages also have their drawbacks in the approaches I've seen based on object-oriented technology.

To refine the model, a developer needs to be able to modify the language. This may involve modifying grammar declarations and other language-interpreting features, as well as modifying underlying class libraries. I'm all in favor of learning advanced technology and design concepts, but we have to soberly assess the skills of a particular team, as well as the likely skills of future maintenance teams. Also, there is value in the seamlessness of an application and a model implemented in the same language. Another drawback is that it can be difficult to refactor client code to conform to a revised model and its associated domain-specific language. Of course, someone may come up with a technical fix for the refactoring problems.

This technique might be most useful for very mature models, perhaps where client code is being written by a different team. Generally, such setups lead to the poisonous distinction between highly technical framework builders and technically unskilled application builders, but it doesn't have to be that way.

A declarative style of design

Once your design has INTENTION-REVEALING INTERFACES, SIDE-EFFECT-FREE FUNCTIONS, and ASSERTIONS, you are edging into declarative territory. Many of the benefits of declarative design are obtained once you have combinable elements that communicate their meaning, and have characterized or obvious effects, or no observable effects at all.

A supple design can make it possible for the client code to use a declarative style of design.
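For instance, here is a minimal sketch of what such client code can look like, in a SPECIFICATION-combinator style (the domain and all names are invented): each element is side-effect free and intention-revealing, so composing them reads almost like a declaration.

```java
// Sketch: a declarative style built from combinable, side-effect-free
// elements with intention-revealing names (a SPECIFICATION flavor;
// the domain and all names here are invented).
interface Spec<T> {
    boolean isSatisfiedBy(T candidate);

    default Spec<T> and(Spec<T> other) {
        return candidate -> isSatisfiedBy(candidate) && other.isSatisfiedBy(candidate);
    }
}

record Container(double capacityCubicMeters, boolean ventilated) {}

final class ContainerSpecs {
    static Spec<Container> ventilated() { return Container::ventilated; }

    static Spec<Container> capacityAtLeast(double cubicMeters) {
        return c -> c.capacityCubicMeters() >= cubicMeters;
    }
}

// Client code states what is required, not how to check it:
//
//     Spec<Container> forProduce =
//             ContainerSpecs.ventilated().and(ContainerSpecs.capacityAtLeast(30));
```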

Angles of attack

This chapter has presented a raft of techniques to clarify the intent of code, to make the consequences of using it transparent, and to decouple model elements. Even so, this kind of design is difficult. You can't just look at an enormous system and say, "Let's make this supple." You have to choose targets. Here are a couple of broad approaches [...].

Carve off subdomains

You just can't tackle the whole design at once. Pick away at it. Some aspects of the system will suggest approaches to you, and they can be factored out and worked over. You may see a part of the model that can be viewed as specialized math; separate that. Your application enforces complex rules restricting state changes; pull this out into a separate model or simple framework that lets you declare the rules. With each such step, not only is the new module clean, but also the part left behind is smaller and clearer. Part of what's left is written in a declarative style, a declaration in terms of the special math or validation framework, or whatever form the subdomain takes.

It is more useful to make a big impact on one area, making a part of the design really supple, than to spread your efforts thin.

Draw on established formalisms, when you can

Creating a tight conceptual framework from scratch is something you can't do every day. Sometimes you discover and refine one of these over the course of the life of a project. But you can often use and adapt conceptual systems that are long established in your domain or others, some of which have been refined and distilled over centuries. Many business applications involve accounting, for example. Accounting defines a well-developed set of ENTITIES and rules that make for an easy adaptation to a deep model and a supple design.

There are many such formalized conceptual frameworks, but my personal favorite is math. It is surprising how useful it can be to pull out some twist on basic arithmetic. Many domains include math somewhere. Look for it. Dig it out. Specialized math is clean, combinable by clear rules, and people find it easy to understand.
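One hypothetical example of such a twist on basic arithmetic: allocating a money amount across several parties so that rounding never gains or loses a cent. Scattered through application code, this is fiddly and error-prone; dug out into one small specialized type, it becomes easy to trust.

```java
// Hypothetical sketch: a tiny piece of specialized money math, isolated.
// Splits a nonnegative amount of cents into portions that always sum back
// to the original amount, spreading the remainder one cent at a time.
final class Allocation {
    static long[] split(long totalCents, int parts) {
        if (totalCents < 0 || parts <= 0)
            throw new IllegalArgumentException("nonnegative total, positive parts");
        long base = totalCents / parts;
        long remainder = totalCents % parts;
        long[] portions = new long[parts];
        for (int i = 0; i < parts; i++) {
            portions[i] = base + (i < remainder ? 1 : 0);
        }
        return portions;
    }
}

// Allocation.split(100, 3) yields {34, 33, 33}: no cent gained or lost.
```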

Eric Evans, "Supple Design", in Domain-Driven Design: Tackling Complexity in the Heart of Software, 270-283.

Sunday, July 25, 2021

Supple design

The ultimate purpose of software is to serve users. But first, that same software has to serve developers. This is especially true in a process that emphasizes refactoring. As a program evolves, developers will rearrange and rewrite every part. They will integrate the domain objects into the application and with new domain objects. Even years later, maintenance programmers will be changing and extending the code. People have to work with this stuff. But will they want to?

To have a project accelerate as development proceeds — rather than get weighed down by its own legacy — demands a design that is a pleasure to work with, inviting to change. A supple design.

A lot of overengineering has been justified in the name of flexibility. But more often than not, excessive layers of abstraction and indirection get in the way. Look at the design of software that really empowers the people who handle it; you will usually see something simple. Simple is not easy. To create elements that can be assembled into elaborate systems and still be understandable, a dedication to MODEL-DRIVEN DESIGN has to be joined with a moderately rigorous design style. It may well require relatively sophisticated design skill to create or to use.

Early versions of a design are usually stiff. Many never acquire any suppleness in the time frame or budget of the project. I've never seen a large program that had this quality throughout. But when complexity is holding back progress, honing the most crucial, intricate parts to a supple design makes the difference between getting sucked down into legacy maintenance and punching through the complexity ceiling.

INTENTION-REVEALING INTERFACES

We are always fighting cognitive overload: If the client developer's mind is flooded with detail about how a component does its job, his mind isn't clear to work out the intricacies of the client design. This is true even when the same person is playing both roles, developing and using his own code, because even if he doesn't have to learn those details, there is a limit to how many factors he can consider at once.

Therefore:

Name classes and operations to describe their effect and purpose, without reference to the means by which they do what they promise. This relieves the client developer of the need to understand the internals. These names should conform to the UBIQUITOUS LANGUAGE so that team members can quickly infer their meaning. Write a test for a behavior before creating it, to force your thinking into client developer mode.

All the tricky mechanism should be encapsulated behind abstract interfaces that speak in terms of intentions, rather than means.

In the public interfaces of the domain, state relationships and rules, but not how they are enforced; describe events and actions, but not how they are carried out; formulate the equation but not the numerical method to solve it. Pose the question, but don't present the means by which the answer shall be found.
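A before-and-after sketch (a made-up image-editing example): the first name exposes the means, while the second states only the effect and purpose.

```java
// Sketch (made-up example): naming by intention rather than by means.
final class Image { /* pixel data elided for brevity */ }

// Named for its mechanism, this forces clients to know what a convolution
// with a sharpening kernel is for:
//
//     Image convolveWithSharpenKernel(Image source)
//
// Named for its effect and purpose, the mechanism stays encapsulated:
interface ImageSharpener {
    /** Returns a sharpened copy; the source image is not modified. */
    Image sharpen(Image source);
}
```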

SIDE-EFFECT-FREE FUNCTIONS

Operations can be broadly divided into two categories, commands and queries. Queries obtain information from the system, possibly by simply accessing data in a variable, possibly performing a calculation based on that data. Commands (also known as modifiers) are operations that effect some change to the system (for a simple example, by setting a variable). In standard English, the term side effect implies an unintended consequence, but in computer science, it means any effect on the state of the system.

Interactions of multiple rules or compositions of calculations become extremely difficult to predict. The developer calling an operation must understand its implementation and the implementation of all its delegations in order to anticipate the result. The usefulness of any abstraction of interfaces is limited if the developers are forced to pierce the veil. Without safely predictable abstractions, the developers must limit the combinatory explosion, placing a low ceiling on the richness of behavior that is feasible to build.

Operations that return results without producing side effects are called functions. A function can be called multiple times and return the same value each time. A function can call on other functions without worrying about the depth of nesting. Functions are much easier to test than operations that have side effects. For these reasons, functions lower risk.

Obviously, you can't avoid commands in most software systems, but the problem can be mitigated in two ways. First, you can keep the commands and queries strictly segregated in different operations. Ensure that the methods that cause changes do not return domain data and are kept as simple as possible. Perform all queries and calculations in methods that cause no observable side effects.

Second, there are often alternative models and designs that do not call for an existing object to be modified at all. Instead, a new VALUE OBJECT, representing the result of the computation, is created and returned. A VALUE OBJECT can be created in answer to a query, handed off, and forgotten — unlike an ENTITY, whose life cycle is carefully regulated.

Therefore:

Place as much of the logic of the program as possible into functions, operations that return results with no observable side effects. Strictly segregate commands (methods that result in modifications to observable state) into very simple operations that do not return domain information. Further control side effects by moving complex logic into VALUE OBJECTS when a concept fitting the responsibility presents itself.

SIDE-EFFECT-FREE FUNCTIONS, especially in immutable VALUE OBJECTS, allow safe combination of operations. When a FUNCTION is presented through an INTENTION-REVEALING INTERFACE, a developer can use it without understanding the detail of its implementation.
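Here is a sketch along the lines of the book's paint-mixing example (reconstructed from memory, so treat the details as illustrative): the query side lives in an immutable VALUE OBJECT as a side-effect-free function, while the command stays simple and returns no domain data.

```java
// Sketch: segregating commands from queries, and returning a new
// VALUE OBJECT instead of modifying an existing object.

// A VALUE OBJECT: immutable, so combining it is safe and repeatable.
record PigmentColor(double red, double yellow, double blue) {
    // A side-effect-free FUNCTION: computes a mixed color and returns a
    // new value; neither operand changes.
    PigmentColor mixedWith(PigmentColor other, double ratio) {
        return new PigmentColor(
                red * (1 - ratio) + other.red() * ratio,
                yellow * (1 - ratio) + other.yellow() * ratio,
                blue * (1 - ratio) + other.blue() * ratio);
    }
}

class Paint {
    private double volume;
    private PigmentColor color;

    Paint(double volume, PigmentColor color) {
        this.volume = volume;
        this.color = color;
    }

    // A command: changes observable state and returns no domain data.
    void mixIn(Paint other) {
        this.color = color.mixedWith(other.color,
                other.volume / (volume + other.volume));
        this.volume += other.volume;
    }
}
```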

ASSERTIONS

When the side effects of operations are only defined implicitly by their implementation, designs with a lot of delegation become a tangle of cause and effect. The only way to understand a program is to trace execution through branching paths. The value of encapsulation is lost. The necessity of tracing concrete execution defeats abstraction.

We need a way of understanding the meaning of a design element and the consequences of executing an operation without delving into its internals. INTENTION-REVEALING INTERFACES carry us part of the way there, but informal suggestions of intentions are not always enough. The "design by contract" school goes the next step, making "assertions" about classes and methods that the developer guarantees will be true. Briefly, "post-conditions" describe the side effects of an operation, the guaranteed outcome of calling a method. "Preconditions" are like the fine print on the contract, the conditions that must be satisfied in order for the post-condition guarantee to hold. Class invariants make assertions about the state of an object at the end of any operation. Invariants can also be declared for entire AGGREGATES, rigorously defining integrity rules.

All these assertions describe state, not procedures, so they are easier to analyze. Class invariants help characterize the meaning of a class, and simplify the client developer's job by making the objects more predictable. If you trust the guarantee of a post-condition, you don't have to worry about how a method works. The effect of delegations should already be incorporated into the assertions.

Even though many object-oriented languages don't currently support ASSERTIONS directly, ASSERTIONS are still a powerful way of thinking about a design. Automated unit tests can partially compensate for the lack of language support. Because ASSERTIONS are all in terms of states, rather than procedures, they make tests easy to write. The test setup puts the preconditions in place; then, after execution, the test checks to see if the post-conditions hold.
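A small sketch of that compensation, using JUnit-style tests against a hypothetical Account: the setup establishes the precondition, execution runs the operation, and the assertions check the post-condition.

```java
// Sketch: assertions expressed as a unit test (hypothetical Account domain).
// Contract for withdraw(amount):
//   precondition:   0 < amount <= balance
//   post-condition: new balance == old balance - amount
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class Account {
    private long balance;

    Account(long openingBalance) { balance = openingBalance; }

    long balance() { return balance; }

    void withdraw(long amount) {
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition violated");
        balance -= amount;   // establishes the post-condition
    }
}

class AccountWithdrawTest {
    @Test
    void withdrawReducesBalanceByExactlyTheAmount() {
        Account account = new Account(100);   // setup: precondition in place
        account.withdraw(30);
        assertEquals(70, account.balance());  // check: post-condition holds
    }

    @Test
    void withdrawRejectsAmountsBeyondTheBalance() {
        Account account = new Account(100);
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(150));
    }
}
```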

Clearly stated invariants and pre- and post-conditions allow a developer to understand the consequences of using an operation or object. Theoretically, any noncontradictory set of assertions would work. But humans don't just compile predicates in their heads. They will be extrapolating and interpolating the concepts of the model, so it is important to find models that make sense to people as well as satisfying the needs of the application.

CONCEPTUAL CONTOURS

Sometimes people chop functionality fine to allow flexible combination. Sometimes they lump it large to encapsulate complexity. Sometimes they seek a consistent granularity, making all classes and operations similar in scale. These are oversimplifications that don't work well as general rules. But they are motivated by a basic set of problems.

When elements of a model or design are embedded in a monolithic construct, their functionality gets duplicated. The external interface doesn't say everything a client might care about. Their meaning is hard to understand, because different concepts are mixed together.

On the other hand, breaking down classes and methods can pointlessly complicate the client, forcing client objects to understand how tiny pieces fit together. Worse, a concept can be lost completely. Half of a uranium atom is not uranium. And of course, it isn't just grain size that counts, but also where the grain runs.

The twin fundamentals of high cohesion and low coupling play a role in design at all scales, from individual methods up through classes and MODULES to large-scale structures. These two principles apply to concepts as much as to code. To avoid slipping into a mechanistic view of them, temper your technical thinking by frequently touching base with your intuition for the domain. With each decision, ask yourself, "Is this an expedient based on a particular set of relationships in the current model and code, or does it echo some contour of the underlying domain?"

Find the conceptually meaningful unit of functionality, and the resulting design will be both flexible and understandable. For example, if an "addition" of two objects has a coherent meaning in the domain, then implement methods at that level. Don't break the add() into two steps. Don't proceed to the next step within the same operation. On a slightly larger scale, each object should be a single complete concept, a "WHOLE VALUE."

By the same token, there are areas in any domain where detail isn't interesting to the kind of people the software serves. Clumping things that don't need to be dissected or rearranged avoids clutter and makes it easier to see the elements that really are meant to recombine.

Therefore:

Decompose design elements (operations, interfaces, classes, and AGGREGATES) into cohesive units, taking into consideration your intuition of the important divisions in the domain. Observe the axes of change and stability through successive refactorings and look for the underlying CONCEPTUAL CONTOURS that explain these shearing patterns. Align the model with the consistent aspects of the domain that make it a viable area of knowledge in the first place.

The goal is a simple set of interfaces that combine logically to make sensible statements in the UBIQUITOUS LANGUAGE, and without the distraction and maintenance burden of irrelevant options. This is typically an outcome of refactoring: it's hard to produce up front. But it may never emerge from technically oriented refactoring; it emerges from refactoring toward deeper insight.

Even when the design follows CONCEPTUAL CONTOURS, there will need to be modifications and refactoring. When successive refactoring tends to be localized, not shaking multiple broad concepts of the model, it is an indicator of model fit. Encountering a requirement that forces extensive changes in the breakdown of the objects and methods is a message: Our understanding of the domain needs refinement. It presents an opportunity to deepen the model and make the design more supple.
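Looking back at the add() example, here is a tiny sketch of a WHOLE VALUE (invented here): a date range modeled as one complete concept keeps its integrity rule and its domain-meaningful operations in one place, instead of leaking out as two loose dates.

```java
// Sketch: a WHOLE VALUE (made-up example). One complete concept, with its
// integrity rule and its domain-meaningful operations kept together.
import java.time.LocalDate;

record DateRange(LocalDate start, LocalDate end) {
    DateRange {
        if (start.isAfter(end))
            throw new IllegalArgumentException("start must not be after end");
    }

    boolean overlaps(DateRange other) {
        return !start.isAfter(other.end()) && !other.start().isAfter(end);
    }
}
```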

STANDALONE CLASSES

Interdependencies make models and designs hard to understand. They also make them hard to test and maintain. And interdependencies pile up easily.

Both MODULES and AGGREGATES are aimed at limiting the web of interdependencies. When a highly cohesive subdomain is carved out into a MODULE, a set of objects are decoupled from the rest of the system, so there are a finite number of interrelated concepts. But even a MODULE can be a lot to think about without an almost fanatical commitment to controlling dependencies within it.

Even within a MODULE, the difficulty of interpreting a design increases wildly as dependencies are added. This adds to mental overload, limiting the design complexity a developer can handle. Implicit concepts contribute to this load even more than explicit references.

Refined models are distilled until every remaining connection between concepts represents something fundamental to the meaning of those concepts. In an important subset, the number of dependencies can be reduced to zero, resulting in a class that can be fully understood all by itself, along with a few primitives and basic library concepts.

Implicit concepts, recognized or unrecognized, count just as much as explicit references. Although we can generally ignore dependencies on primitive values such as integers and strings, we can't ignore what they represent.

Low coupling is fundamental to object design. When you can, go all the way. Eliminate all other concepts from the picture. Then the class will be completely self-contained and can be studied and understood alone. Every such self-contained class significantly eases the burden of understanding a MODULE.

Dependencies on other classes within the same module are less harmful than those outside. Likewise, when two objects are naturally tightly coupled, multiple operations involving the same pair can actually clarify the nature of the relationship. The goal is not to eliminate all dependencies, but to eliminate all nonessential ones. If every dependency can't be eliminated, each one that is removed frees the developer to concentrate on the remaining conceptual dependencies.

Try to factor the most intricate computations into STANDALONE CLASSES, perhaps by modeling VALUE OBJECTS held by the more connected classes.
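A minimal sketch of that move (names invented): the intricate part of the computation is pulled into a VALUE OBJECT that depends on nothing but primitives, so it can be studied and tested entirely on its own.

```java
// Sketch: a STANDALONE CLASS (hypothetical example). The computation lives
// in a VALUE OBJECT with zero dependencies on other domain concepts, so it
// can be understood in complete isolation from the entities that hold it.
final class InterestRate {
    private final double annualRate;   // e.g. 0.05 for 5% per year

    InterestRate(double annualRate) {
        if (annualRate < 0) throw new IllegalArgumentException("negative rate");
        this.annualRate = annualRate;
    }

    // Compound growth factor after the given number of monthly periods.
    double growthFactorAfterMonths(int months) {
        return Math.pow(1 + annualRate / 12, months);
    }
}

// A more connected class (a Loan entity, say) can hold an InterestRate and
// delegate the math to it, keeping its web of associations out of the
// calculation.
```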

CLOSURE OF OPERATIONS

Of course, there will be dependencies, and that isn't a bad thing when the dependency is fundamental to the concept. Stripping interfaces down to deal with nothing but primitives can impoverish them. But a lot of unnecessary dependencies, and even entire concepts, get introduced at interfaces.

Most interesting objects end up doing things that can't be characterized by primitives alone.

Another common practice in refined designs is what I call "CLOSURE OF OPERATIONS." The name comes from that most refined of conceptual systems, mathematics. 1 + 1 = 2. The addition operation is closed under the set of real numbers. Mathematicians are fanatical about not introducing extraneous concepts, and the property of closure provides them a way of defining an operation without involving any other concepts.

Therefore:

Where it fits, define an operation whose return type is the same as the type of its argument(s). If the implementer has state that is used in the computation, then the implementer is effectively an argument of the operation, so the argument(s) and return value should be of the same type as the implementer. Such an operation is closed under the set of instances of that type. A closed operation provides a high-level interface without introducing any dependency on other concepts.

This pattern is most often applied to the operations of a VALUE OBJECT. Because the life cycle of an ENTITY has significance in the domain, you can't just conjure up a new one to answer a question. There are operations that are closed under an ENTITY type. You could ask an Employee object for its supervisor and get back another Employee. But in general, ENTITIES are not the sort of concepts that are likely to be the result of a computation. So, for the most part, this is an opportunity to look for in the VALUE OBJECTS.

An operation can be closed under an abstract type, in which case specific arguments can be of different concrete classes. After all, addition is closed under real numbers, which can be either rational or irrational.

As you're experimenting, looking for ways to reduce interdependence and increase cohesion, you sometimes get halfway to this pattern. The argument matches the implementer, but the return type is different, or the return type matches the receiver and the argument is different. These operations are not closed, but they do give some of the advantages of CLOSURE. When the extra type is a primitive or basic library class, it frees the mind almost as much as CLOSURE.
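A minimal sketch pulling these ideas together (names invented): an operation on a VALUE OBJECT closed under its own type, plus a half-closed operation whose extra type is only a primitive.

```java
// Sketch: CLOSURE OF OPERATIONS on a VALUE OBJECT (made-up example).
// Argument and return types match the receiver's type, so chaining
// operations never drags another concept into the picture.
final class TimeSpan {
    private final long minutes;

    TimeSpan(long minutes) { this.minutes = minutes; }

    // Closed under TimeSpan: TimeSpan + TimeSpan -> TimeSpan.
    TimeSpan plus(TimeSpan other) { return new TimeSpan(minutes + other.minutes); }

    // Half-closed: the extra type is a primitive, which frees the mind
    // almost as much as full closure.
    TimeSpan times(int factor) { return new TimeSpan(minutes * factor); }
}

// Usage: TimeSpan total = new TimeSpan(90).plus(new TimeSpan(30)).times(2);
```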


Making software obvious, predictable, and communicative makes abstraction and encapsulation effective. Models can be factored so that objects are simple to use and understand yet still have rich, high-level interfaces.

These techniques require fairly advanced design skills to apply and sometimes even to write a client. The usefulness of a MODEL-DRIVEN DESIGN is sensitive to the quality of the detailed design and implementation decisions, and it only takes a few confused developers to derail a project from the goal.

That said, for the team willing to cultivate its modeling and design skills, these patterns and the way of thinking they reflect yield software that developers can work and rework to create complex software.

Eric Evans, "Supple Design", in Domain-Driven Design: Tackling Complexity in the Heart of Software, 243-270.

Sunday, April 25, 2021

List of posts

I'm always reading posts, articles, and papers from various sources on the Internet. Over the last four years I have developed the habit of collecting the texts that are most relevant to me, and now that the collection has reached 100 posts, I think it's a good time to share them with you. I have divided them into some broad categories. The categories are organized in alphabetical order, although the posts themselves do not follow a specific order. Most of the texts are in English and some of them are in Portuguese. Enjoy!

APIs

  1. Best Practices for Designing a Pragmatic RESTful API
  2. API Gateways are going through an identity crisis
  3. API Versioning Has No "Right Way"

Architecture

  1. Unifying Mobile Onboarding Experiences at Uber
  2. What do you mean by "Event-Driven"?
  3. Focusing on Events
  4. 11 erros comuns em arquiteturas orientadas a eventos e como evitá-los
  5. Event Collaboration
  6. ParallelChange
  7. The evergreen cache
  8. The LMAX Architecture
  9. A primer on functional architecture
  10. A Brief History of Scaling LinkedIn
  11. The Architect Elevator — Visiting the upper floors
  12. BLIKI: Arquitetura de Software — EximiaCo.Tech

Blockchain

  1. WTF is The Blockchain?
  2. Who owns the Blockchain?

Career and personal development

  1. Programmer Competency Matrix
  2. LimitationsOfGeneralAdvice
  3. SoftwareDevelopmentAttitude
  4. Your obsession with learning might be holding you back
  5. How to get experience as a software engineer
  6. On Being A Senior Engineer
  7. What does sponsorship look like?
  8. A forty-year career.
  9. Pendulum swings
  10. How to Disagree

Development tools

  1. How (and Why) to Log Your Entire Bash History
  2. Who Needs Git When You Got ZFS?

Documentation

  1. Como escrever boas documentações
  2. Etsy's experiment with immutable documentation
  3. The Golden Rules of Code Documentation
  4. Our team broke up with instant-legacy releases and you can too
  5. Undervalued Software Engineering Skills: Writing Well

Incident management

  1. Introducing Dispatch
  2. Blameless PostMortems and a Just Culture
  3. Three months, 30x demand: How we scaled Google Meet during COVID-19

Microservices and distributed systems

  1. Best Practices for Building a Microservice Architecture
  2. Standing on Distributed Shoulders of Giants
  3. The rise of non-microservices architectures
  4. Thinking about Microservices: The Fiefdom and the Emissaries
  5. Where is my cache? Architectural patterns for caching microservices by example
  6. Patterns for Microservices — Sync vs. Async
  7. The Hardest Part About Microservices: Your Data
  8. The Hardest Part of Microservices: Calling Your Services
  9. Introducing Domain-Oriented Microservice Architecture
  10. Seven Hard-Earned Lessons Learned Migrating a Monolith to Microservices
  11. Backend-in-the-frontend: a pattern for cleaner code
  12. The Log: What every software engineer should know about real-time data's unifying abstraction

People, teams, and processes

  1. The Agile Fluency Model
  2. Why Your Employees Are Losing Motivation
  3. Our obsession with performance data is killing performance
  4. Evidence Based Scheduling
  5. 3 Habits of a Highly Effective Growth Engineering Team
  6. The epistemology of software quality
  7. Building a Platform Team — Laying the Foundations
  8. Maximizing Developer Effectiveness
  9. front-of-the-front-end and back-of-the-front-end web development
  10. Embrace the Grind

Product

  1. Design for the Novice, Configure for the Pro
  2. Once You’ve Solved for the Novice, You Need to Handle the Pro
  3. When a rewrite isn’t: rebuilding Slack on the desktop

Programming languages, design, and implementation

  1. Revenge of the Nerds
  2. Things You Should Never Do, Part I
  3. Python Design Patterns: For Sleek And Fashionable Code
  4. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
  5. Efficiently Exploiting Multiple Cores with Python
  6. Python 3 Q & A
  7. DRY is about Knowledge
  8. Domain-Driven Design is Linguistic
  9. GoF Design Patterns — Sobreviveu ao teste do tempo?
  10. Design patterns of 1972
  11. Ralph Johnson on design patterns
  12. "Design Patterns" Aren't
  13. Monopólio de linguagens: uma perspectiva além de tecnologia
  14. Programming as translation
  15. Short Identifier
  16. Crash-only software: More than meets the eye
  17. Ignore All Web Performance Benchmarks, Including This One
  18. Too DRY — The Grep Test
  19. The Right Way to Bundle Your Assets for Faster Sites over HTTP/2
  20. Collection Pipeline

Security

  1. So you want to secure something
  2. Up to 20% of your application dependencies may be unmaintained

Technical debts

  1. Chernobyl: The True Cost Of Technical Debt
  2. Three Tips for Managing Technical Debt: While Maintaining Developer Velocity (and Sanity)
  3. When Your Tech Debt Comes Due
  4. TechnicalDebtQuadrant

DevOps

  1. 5 Lessons Learned From Writing Over 300,000 Lines of Infrastructure Code
  2. Monitoring Check Smells
  3. The Calculus of Service Availability
  4. How to invest in technical infrastructure
  5. Why Google Stores Billions of Lines of Code in a Single Repository

Technology and history

  1. True Innovation
  2. The Secret History of Women in Coding
  3. The Yoda of Silicon Valley

Tests and quality

  1. Software Testing Anti-patterns
  2. Is High Quality Software Worth the Cost?
  3. The Rise of Test Impact Analysis
  4. I test in prod

Thursday, April 1, 2021

List of podcasts

Podcasts have been growing in audience around the world and gaining more and more ground in Brazil. Since I discovered the medium, I have made it part of my routine as one more source of knowledge. It is a great way, for example, to use the time spent on more mechanical tasks (such as cleaning the house, washing the dishes, driving, or walking) to consume new content and learn a little more. Occasionally I listen to one-off episodes of certain podcasts, either because someone I know recommends them or because I stumble upon something on the Internet that catches my attention. There are some podcasts, however, that I follow regularly, and those are the ones I share in the list below:

  • Hipsters Ponto Tech: the technology podcast I have followed the longest (since it started, in 2016). With diverse topics, very friendly hosts, and great guests, Hipsters strikes a good balance between information and entertainment. The episodes of Hipsters On The Road, the podcast's spin-off, are also very good.
  • DEVNAESTRADA: although it has some more technical episodes about specific technologies, DEVNAESTRADA's strength is its focus on the daily life of people who work in software development, covering both professional aspects (such as career, processes, and tools) and personal ones (such as health, habits, and motivation). In the interview episodes you can hear inspiring life stories from people working in technology inside and outside Brazil.
  • Lambda3 Podcast: with a varied feed, Lambda3's podcast also strikes a good balance between information density and a light, fun presentation. Its technical episodes are well grounded, making it one of the leading Brazilian references in technology today.
  • Naruhodo: one of the podcasts I most enjoy listening to. Naruhodo pairs everyday curiosities with science, presenting its subjects with plenty of good humor without giving up scientific rigor. It is not a technology podcast, but much of what it covers applies in that context too.
  • Software Engineering Daily: the only one on the list in English and probably the most technical of them all. Jeff Meyerson interviews top professionals from the software industry to dissect specific technologies and approaches.