Friday, October 20, 2017

Identifiers are for look-up. Names are for search.

We know from the principles of the world-wide web that every URL identifies a specific entity. It's fairly apparent that "https://ebank.com/accounts/a49a9762-3790-4b4f-adbf-4577a35b1df7" is the URL of a specific bank account. Whenever I use this URL, now or in the future, it will always refer to the same bank account. You might be tempted to think that "https://library.com/shelves/american-literature/books/moby-dick" is the URL of a specific book. If you think renaming and relocating books could never make sense in a library API, even hypothetically, then you can perhaps defend that point of view, but otherwise you have to think of this URL differently. When I use this URL today, it refers to a specific, dog-eared copy of Moby Dick that is currently on the American Literature shelf. Tomorrow, if the book or shelf is moved or renamed, it may refer to a shiny new copy that replaced the old one, or to no book at all. This shows that the second URL isn’t the URL of a specific book — it must be the URL of something else. You should think of it as the URL of a search result. Specifically, the result of this search:

find the book that is [currently] named "moby-dick", and is [currently] on the shelf that is [currently] named "american-literature"

Here’s another URL for the same search result, where the difference is entirely one of URL style, not meaning:

https://library.com/search?kind=book&name=moby-dick&shelf=(name=american-literature)

Understanding that URLs based on hierarchical names are actually the URLs of search results rather than the URLs of the entities in those search results is a key idea that helps explain the difference between naming and identity.

Martin Nally, in API design: Choosing between names and identifiers in URLs.
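
A minimal sketch of the distinction (my own illustration; the data and functions are hypothetical, only the idea comes from the article): an identifier look-up always resolves to the same record, while a name-based look-up is a search that is re-evaluated on every request.

    # Identifier look-up vs. name-based search (hypothetical data).
    books_by_id = {
        "a49a9762-3790-4b4f-adbf-4577a35b1df7": {"title": "Moby Dick"},
    }

    # Name bindings are the part that renaming or moving changes.
    shelf_contents = {
        ("american-literature", "moby-dick"): "a49a9762-3790-4b4f-adbf-4577a35b1df7",
    }

    def lookup_by_identifier(book_id):
        """Same id, same book, now and in the future."""
        return books_by_id[book_id]

    def lookup_by_name(shelf, name):
        """A search re-evaluated on every request."""
        book_id = shelf_contents.get((shelf, name))
        return books_by_id.get(book_id)

    assert lookup_by_name("american-literature", "moby-dick") is not None
    # Move the book to another shelf: the identifier still works,
    # but the hierarchical name now refers to an empty search result.
    shelf_contents[("classics", "moby-dick")] = shelf_contents.pop(
        ("american-literature", "moby-dick"))
    assert lookup_by_name("american-literature", "moby-dick") is None
    assert lookup_by_identifier(
        "a49a9762-3790-4b4f-adbf-4577a35b1df7")["title"] == "Moby Dick"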

Monday, October 2, 2017

The use of diagrams and flow charts

The flow chart is a most thoroughly oversold piece of program documentation. Many programs don't need flow charts at all; few programs need more than a one-page flow chart.

Flow charts show the decision structure of a program, which is only one aspect of its structure. They show decision structure rather elegantly when the flow chart is on one page, but the overview breaks down badly when one has multiple pages, sewed together with numbered exits and connectors.

The one-page flow chart for a substantial program becomes essentially a diagram of program structure, and of phases or steps. As such it is very handy.

Frederick P. Brooks, Jr., "The Other Face", in The Mythical Man-Month (Anniversary Ed.), 167-168.

Had I not been writing a book, I would have drawn these diagrams by hand on a scrap of paper or a whiteboard. I would not have taken the time to use a drawing tool. There are no circumstances that I know of where using a drawing tool is faster than a napkin.

Having used the diagrams to help me evolve the code, I would not have kept the diagrams. In every case, the ones I drew for myself were intermediate steps.

Is there value in keeping diagrams at this level of detail? Clearly, if you are trying to expose your reasoning, as I am doing in this book, they come in pretty handy. But usually we are not trying to document the evolutionary path of a few hours of coding. Usually, these diagrams are transient and are better thrown away. At this level of detail, the code is generally good enough to act as its own documentation. At higher levels, that is not always true.

Robert C. Martin, "Observer - Backing into a Pattern", in Agile Software
Development: Principles, Patterns, and Practices
, 315.

Diagrams are a means of communication and explanation, and they facilitate brainstorming. They serve these ends best if they are minimal. Comprehensive diagrams of the entire object model fail to communicate or explain; they overwhelm the reader with detail and they lack meaning. This leads us away from the all-encompassing object model diagram, or even the all-encompassing database repository of UML. It leads us toward simplified diagrams of conceptually important parts of the object model that are essential to understanding the design.

The vital detail about the design is captured in the code. A well-written implementation should be transparent, revealing the model underlying it. Supplemental diagrams and documents can guide people's attention to the central points. Natural language discussion can fill in the nuances of meaning. This is why I prefer to turn things inside out from the way a typical UML diagram handles them. Rather than a diagram annotated with text, I write a text document illustrated with selective and simplified diagrams.

Always remember that the model is not the diagram.

Eric Evans, "Communication and the Use of Language", in Domain-Driven Design: Tackling Complexity in the Heart of Software, 36-37.

Thursday, September 14, 2017

Who owns the interface?

In the early 1990s, we used to think that the physical bond ruled. There were very reputable books that recommended that inheritance hierarchies be placed together in the same physical package. This seemed to make sense because inheritance is such a strong physical bond. But over the last decade, we have learned that the physical strength of inheritance is misleading and that inheritance hierarchies should usually not be packaged together. Rather, clients tend to be packaged with the interfaces they control.

This misalignment of the strength of logical and physical bond is an artifact of statically typed languages like C++ and Java. Dynamically typed languages, like Smalltalk, Python, and Ruby, don't have the misalignment because they don't use inheritance to achieve polymorphic behavior.

Robert C. Martin, "Abstract Server, Adapter, and Bridge", in Agile Software
Development: Principles, Patterns, and Practices
, 319.
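
A quick sketch of that last point (my own, reusing the chapter's switch-and-light illustration): in a dynamically typed language, the "interface" the client controls is just the set of methods it calls, so no inheritance relationship is needed for polymorphism.

    # Polymorphism without inheritance: the Switch works with any object
    # that happens to provide turn_on/turn_off (hypothetical classes).
    class Switch:
        def __init__(self, device):
            self.device = device
        def engage(self):
            self.device.turn_on()

    class Light:
        def turn_on(self):
            print("light on")
        def turn_off(self):
            print("light off")

    class Fan:  # shares no base class with Light
        def turn_on(self):
            print("fan on")
        def turn_off(self):
            print("fan off")

    Switch(Light()).engage()  # light on
    Switch(Fan()).engage()    # fan on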

Sunday, August 6, 2017

Learning to learn

Recently, I listened to Fabio Akita's interview on DEVNAESTRADA and was quite impressed by his story, his knowledge, and his vision. In particular, I found especially pertinent his comments on the need to learn how to learn (that is, to cultivate a mindset of constant study, growth, and evolution) and on the role of the true engineer, which is to solve problems, not to become an advocate for tools (much in the same vein as what I wrote about trade-offs). It is worth listening to him recount his trajectory and understanding how he applies these philosophies to his way of working.

Wednesday, August 2, 2017

Top-down design

The issues we have discussed so far lead to an inescapable conclusion. The package structure cannot be designed from the top down. This means that it is not one of the first things about the system that is designed. Indeed, it seems that it evolves as the system grows and changes.

You may find this to be counterintuitive. We have come to expect that large-grained decompositions, like packages, are also high-level functional decompositions. When we see a large-grained grouping like a package dependency structure, we feel that the packages ought to somehow represent the functions of the system. Yet this does not seem to be an attribute of package dependency diagrams.

In fact, package dependency diagrams have very little to do with describing the function of the application. Instead, they are a map to the buildability of the application. This is why they aren't designed at the start of the project. There is no software to build, and so there is no need for a build map. But as more and more classes accumulate in the early stages of the implementation and design, there is a growing need to manage the dependencies so that the project can be developed without the morning-after syndrome. Moreover, we want to keep changes as localized as possible, so we start paying attention to the SRP and CCP and collocate classes that are likely to change together.

As the application continues to grow, we start becoming concerned about creating reusable elements. Thus, the CRP begins to dictate the composition of packages. Finally, as cycles appear, the ADP is applied and the package dependency graph jitters and grows.

If we were to try to design the package dependency structure before we had designed any classes, we would likely fail rather badly. We would not know much about common closure, we would be unaware of any reusable elements, and we would almost certainly create packages that produce dependency cycles. Thus, the package dependency structure grows and evolves with the logical design of the system.

Robert C. Martin, "Principles of Package Design", in Agile Software
Development: Principles, Patterns, and Practices
, 260-261.

Monday, July 31, 2017

Principles of package design

The Reuse-Release Equivalence Principle (REP)
The granule of reuse is the granule of release.

The Common-Reuse Principle (CRP)
The classes in a package are reused together. If you reuse one of the classes in a package, you reuse them all.

The Common-Closure Principle (CCP)
The classes in a package should be closed together against the same kinds of changes. A change that affects a package affects all the classes in that package and no other packages.

The Acyclic-Dependencies Principle (ADP)
Allow no cycles in the package-dependency graph.

The Stable-Dependencies Principle (SDP)
Depend in the direction of stability.

The Stable-Abstractions Principle (SAP)
A package should be as abstract as it is stable.

Robert C. Martin, "Principles of Package Design", in Agile Software Development: Principles, Patterns, and Practices, 253-268.

See also the pages about these principles in the WikiWikiWeb.
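
The ADP, in particular, lends itself to automation. Below is a small sketch of my own (not from the book) that checks a package-dependency graph for cycles with a depth-first search, the kind of check a build could run automatically.

    # Hypothetical sketch: detect a cycle in a package-dependency graph (ADP).
    def find_cycle(deps):
        """deps maps a package to the packages it depends on. Returns one
        cycle as a list of packages, or None if the graph is acyclic."""
        visiting, done = set(), set()
        path = []

        def visit(pkg):
            visiting.add(pkg)
            path.append(pkg)
            for dep in deps.get(pkg, ()):
                if dep in visiting:  # back edge: a cycle
                    return path[path.index(dep):] + [dep]
                if dep not in done:
                    cycle = visit(dep)
                    if cycle:
                        return cycle
            path.pop()
            visiting.discard(pkg)
            done.add(pkg)
            return None

        for pkg in list(deps):
            if pkg not in done:
                cycle = visit(pkg)
                if cycle:
                    return cycle
        return None

    # "app" -> "ui" -> "domain" -> "app" violates the ADP:
    print(find_cycle({"app": ["ui"], "ui": ["domain"], "domain": ["app"]}))
    # ['app', 'ui', 'domain', 'app']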

Sunday, July 23, 2017

Highlights of Spotify's squad model

The PO is the "entrepreneur" or "product champion", focusing on delivering a great product, while the chapter lead is the "professor" or "competency leader", focusing on technical excellence.

There is a healthy tension between these roles, as the entrepreneur tends to want to speed up and cut corners, while the professor tends to want to slow down and build things properly. Both aspects are needed, that's why it is a "healthy" tension.

[...]

Technically, anyone is allowed to edit any system. Since the squads are effectively feature teams, they normally need to update multiple systems to get a new feature into production.

The risk with this model is that the architecture of a system gets messed up if nobody focuses on the integrity of the system as a whole.

To mitigate this risk, we have a role called "System Owner". All systems have a system owner, or a pair of system owners (we encourage pairing). For operationally critical systems, the System Owner is a Dev-Ops pair – that is, one person with a developer perspective and one person with an operations perspective.

The system owner is the "go to" person(s) for any technical or architectural issues related to that system. He is a coordinator and guides people who code in that system to ensure that they don't stumble over each other. He focuses on things like quality, documentation, technical debt, stability, scalability, and release process.

The System Owner is not a bottleneck or ivory tower architect. He does not personally have to make all decisions, or write all code, or do all releases. He is typically a squad member or chapter lead who has other day-to-day responsibilities in addition to the system ownership. However, from time to time he will take a "system owner day" and do housekeeping work on that system. Normally we try to keep this system ownership to less than a tenth of a person's time, but it varies a lot between systems of course.

We also have a chief architect role, a person who coordinates work on high-level architectural issues that cut across multiple systems. He reviews development of new systems to make sure they avoid common mistakes, and that they are aligned with our architectural vision. The feedback is always just suggestions and input - the decision for the final design of the system still lies with the squad building it.

Henrik Kniberg & Anders Ivarsson, in Scaling Agile @ Spotify, 12-13.

Wednesday, May 24, 2017

Trade-off: the only reasonable measure

Software engineering involves decisions. Decisions about organization, decisions about processes and, above all, decisions about technology. For the latter, the options at every level tend to be many, from the choice of operating system to the choice of hosting platform, passing through programming languages (or, more precisely, tongues, in homage to my colleague Evandro), frameworks, kinds of data storage, and more.

In this scenario, discussing each option becomes extremely valuable. But it can also turn into a mere exercise in reinforcing personal preferences, depending on the attitude of the people involved. In fact, it is quite common to walk into a meeting of this kind and watch it quickly start to resemble a football match, with each person passionately rooting for their favorite technology to win, rather than the scientific, impartial evaluation it ought to be.

Having preferences is absolutely normal; no one can keep people from feeling more comfortable with one technology over another. The problem begins when this personal inclination interferes to the point that the engineer or developer can no longer see the weaknesses of what they like and defend. Consequently, they also become unable to appreciate the strengths of other approaches. In the end, this attitude culminates in an inability to discern when and how to apply each technology in service of the project's goal. The individual comes to believe wholeheartedly that their preferred technologies can solve any problem, no matter the circumstances, and dismisses every other proposal without bothering to reflect on it. Those are the moments when we hear phrases like "Why discuss this? Just use X and it's solved!" or "Use Y? Never!".

Recently, talking with my coworker Myhro about this subject, I heard from him a remark that sums up my opinion well: when it comes to choosing technologies, the trade-off is the only reasonable measure. In other words, no matter how much you like X or Y, when deciding which technology to use you must turn to the good old list of pros and cons, always keeping in mind the result to be achieved. That is the analytical, scientific attitude expected of every good computing professional.

Thursday, May 11, 2017

Think about behavior, not data

Databases are implementation details! Considering the database should be deferred as long as possible. Far too many applications are inextricably tied to their databases because they were designed with the database in mind from the beginning. Remember the definition of abstraction: the amplification of the essential and the elimination of the irrelevant.

Robert C. Martin, "The Payroll Case Study: Iteration One Begins", in Agile
Software Development: Principles, Patterns, and Practices
, 194.

The point is that, as far as the application is concerned, databases are simply mechanisms for managing storage. They should usually not be considered a major factor of the design and implementation. As we have shown here, they can be left for last and handled as a detail. By doing so, we leave open a number of interesting options for implementing the needed persistence and for creating mechanisms to test the rest of the application. We also do not tie ourselves to any particular database technology or product. We have the freedom to choose the database we need, based upon the rest of the design, and we maintain the freedom to change or replace that database product in the future as needed.

Sometimes the nature of the database is one of the requirements of the application. RDBMSs provide powerful query and reporting systems that may be listed as application requirements. However, even when such requirements are explicit, the designers should still decouple the application design from the database design. The application design should not have to depend on any particular kind of database.

Robert C. Martin, "The Payroll Case Study: Implementation", in Agile
Software Development: Principles, Patterns, and Practices
, 249.
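
One common way to realize this advice (my own minimal sketch, not the book's payroll code) is to hide persistence behind an interface owned by the application: tests plug in an in-memory fake, and a real database can be chosen, or replaced, later.

    # Hypothetical sketch: the application owns the persistence interface.
    from abc import ABC, abstractmethod

    class EmployeeGateway(ABC):
        @abstractmethod
        def save(self, emp_id, record): ...
        @abstractmethod
        def load(self, emp_id): ...

    class InMemoryEmployeeGateway(EmployeeGateway):
        """Test double; a relational or document store could replace it."""
        def __init__(self):
            self._rows = {}
        def save(self, emp_id, record):
            self._rows[emp_id] = record
        def load(self, emp_id):
            return self._rows[emp_id]

    def give_raise(gateway, emp_id, amount):
        """Application logic: knows nothing about how records are stored."""
        record = gateway.load(emp_id)
        record["salary"] += amount
        gateway.save(emp_id, record)

    gw = InMemoryEmployeeGateway()
    gw.save(42, {"name": "Bob", "salary": 1000})
    give_raise(gw, 42, 100)
    assert gw.load(42)["salary"] == 1100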

Monday, May 1, 2017

Compromise, perfection and the Liskov Substitution Principle

There are rare occasions when it is more expedient to accept a subtle flaw in polymorphic behavior than to attempt to manipulate the design into complete LSP compliance. Accepting compromise instead of pursuing perfection is an engineering trade-off. A good engineer learns when compromise is more profitable than perfection. However, conformance to the LSP should not be surrendered lightly. The guarantee that a subclass will always work where its base classes are used is a powerful way to manage complexity. Once it is forsaken, we must consider each subclass individually.

Robert C. Martin, "LSP: The Liskov Substitution Principle", in Agile Software
Development: Principles, Patterns, and Practices
, 122.
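
The canonical violation discussed in that chapter is the Square/Rectangle pair. Here is my own compact rendition of it (a sketch, not the book's code): mathematically a square is a rectangle, yet the subclass breaks a client that relies on the base class's behavior.

    # Sketch of the classic Square/Rectangle LSP violation.
    class Rectangle:
        def __init__(self, w, h):
            self._w, self._h = w, h
        def set_width(self, w):
            self._w = w
        def set_height(self, h):
            self._h = h
        def area(self):
            return self._w * self._h

    class Square(Rectangle):
        # Keeping both sides equal preserves "squareness" but silently
        # changes the behavior that clients of Rectangle depend on.
        def set_width(self, w):
            self._w = self._h = w
        def set_height(self, h):
            self._w = self._h = h

    def client(rect):
        rect.set_width(5)
        rect.set_height(4)
        assert rect.area() == 20  # holds for Rectangle, fails for Square

    client(Rectangle(1, 1))  # fine
    client(Square(1, 1))     # AssertionError: Square is not substitutable here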

Friday, April 28, 2017

Premature abstraction

Resisting premature abstraction is as important as abstraction itself.

Robert C. Martin, "OCP: The Open-Closed Principle", in Agile Software
Development: Principles, Patterns, and Practices
, 109.

Monday, April 24, 2017

The source code is the design

[...] programming is not about building software; programming is about designing software.

Jack Reeves, "What is Software Design?", C++ Journal, Vol. 2, No. 2, 1992.

I strongly recommend reading the entire paper.

Friday, April 21, 2017

Keeping the design as good as it can be

Agile developers [...] never say "We'll go back and fix that later." They never let the rot begin.

[...]

The attitude that agile developers have toward the design of the software is the same attitude that surgeons have toward sterile procedure. Sterile procedure is what makes surgery possible. Without it, the risk of infection would be far too high to tolerate. Agile developers feel the same way about their designs. The risk of letting even the tiniest bit of rot begin is too high to tolerate.

[...]

Professionalism dictates that we, as software developers, cannot tolerate code rot.

Robert C. Martin, "What is Agile Design?", in Agile Software Development:
Principles, Patterns, and Practices
, 94.

Symptoms of poor design

  1. Rigidity — The design is hard to change.
  2. Fragility — The design is easy to break.
  3. Immobility — The design is hard to reuse.
  4. Viscosity — It is hard to do the right thing.
  5. Needless Complexity — Overdesign.
  6. Needless Repetition — Mouse abuse.
  7. Opacity — Disorganized expression.

These symptoms are similar in nature to code smells, but they are at a higher level. They are smells that pervade the overall structure of the software rather than a small section of code.

Robert C. Martin, "Agile Design", in Agile Software Development:
Principles, Patterns, and Practices
, 85.
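
To make the first two symptoms more tangible, here is a tiny hypothetical illustration of my own (not from the book): in the first version, adding an output format means editing tested code, so the design is rigid, and every such edit can break the formats that already worked, so it is fragile; the second version accepts the change as an addition.

    import json

    # Rigid and fragile: every new format reopens this function for edit,
    # and each edit risks the branches that already worked.
    def generate_report_rigid(data, fmt):
        if fmt == "csv":
            return ",".join(data)
        if fmt == "json":
            return json.dumps(data)
        raise ValueError(fmt)

    # More flexible: new formats are registered, existing code is untouched.
    FORMATTERS = {"csv": lambda data: ",".join(data)}

    def generate_report(data, fmt):
        return FORMATTERS[fmt](data)

    FORMATTERS["json"] = json.dumps  # added without editing generate_report
    print(generate_report(["a", "b"], "json"))  # ["a", "b"]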

Monday, April 10, 2017

Refactoring

Every software module has three functions. First, there is the function it performs while executing. This function is the reason for the module's existence. The second function of a module is to afford change. Almost all modules will change in the course of their lives, and it is the responsibility of the developers to make sure that such changes are as simple as possible to make. A module that is hard to change is broken and needs fixing, even though it works. The third function of a module is to communicate to its readers. Developers unfamiliar with the module should be able to read and understand it without undue mental gymnastics. A module that does not communicate is broken and needs to be fixed.

[...]

I can't stress this enough. All the principles and patterns in this book come to naught if the code they are employed within is a mess. Before investing in principles and patterns, invest in clean code.

Robert C. Martin, "Refactoring", in Agile Software Development:
Principles, Patterns, and Practices
, 31-42.
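
A trivial example of the third function, communication (my own illustration, not from the chapter): both versions below compute the same thing, but only the second one reads without mental gymnastics.

    # Same behavior, different communication.
    def f(l):
        return sum(x for x in l if x % 2 == 0)

    def sum_of_even_numbers(numbers):
        return sum(n for n in numbers if n % 2 == 0)

    assert f([1, 2, 3, 4]) == sum_of_even_numbers([1, 2, 3, 4]) == 6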

Sunday, April 9, 2017

The future of software engineering

The tar pit of software engineering will continue to be sticky for a long time to come. One can expect the human race to continue attempting systems just within or just beyond our reach; and software systems are perhaps the most intricate of man's handiworks. This complex craft will demand our continual development of the discipline, our learning to compose in larger units, our best use of new tools, our best adaptation of proven engineering management methods, liberal application of common sense, and a God-given humility to recognize our fallibility and limitations.

Frederick P. Brooks, Jr., "The Mythical Man-Month after 20 Years", in The Mythical Man-Month (Anniversary Ed.), 288-289.

Thursday, April 6, 2017

Tests: beyond verification

I must admit that my first contact with software testing was not so long ago. Since then, I've been increasingly impressed with how powerful a good suite of tests can be for verifying that we are delivering the expected product. Yet, as Uncle Bob rightly asserts, verification is just one of the benefits of writing tests. This is especially true when we're talking about TDD. There are at least four other (arguably more important) reasons to write tests first whenever possible:

  • Avoid regressions. A well-constructed suite of tests gives us much more freedom when we need to extend or refactor existing code. It is a safeguard against changes that could inadvertently break working software.
  • Documentation. I had never thought about this before, but tests are a form of living, executable documentation. They have to stay current; otherwise they break. Besides, they show how to work with the code instead of merely telling us how. They're not only a form of documentation, but a very good one.
  • Better design. Writing tests forces us to think from the point of view of the code's caller. Not only that, it forces us to write client code for our own code. This can have a beneficial impact on our architectural decisions, because many issues with the software's interface become more evident as we write the tests (and ahead of time if we write them first, which is even better).
  • Better implementation. This topic is closely related to the previous one, but deserves its own section given the great improvement it can represent for a software project. Citing Uncle Bob again: "the act of writing tests first forces us to decouple the software!". Tightly coupled software is bad: hard to maintain, hard to extend, often hard to understand, and also very hard to test. So, in order to test it, we have to decouple it. The more testable the software is, the more decoupled it is; and the more decoupled it is, the better designed and implemented it is (see the sketch right after this list).
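
Here is the sketch referenced above, a tiny hypothetical illustration of the last two points: writing the test first pushes the notification mechanism behind a seam that the test can fake, so the logic runs without any real mail server.

    # Hypothetical sketch: a test written first forces a decoupled design.
    def register_user(name, notifier):
        """Business logic; 'notifier' is the seam a test can fake."""
        user = {"name": name, "active": True}
        notifier("welcome, " + name)
        return user

    def test_register_user_sends_welcome():
        sent = []
        user = register_user("ada", notifier=sent.append)  # no mail server needed
        assert user["active"]
        assert sent == ["welcome, ada"]

    test_register_user_sends_welcome()  # or let a runner such as pytest collect it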

For more details about these topics I strongly recommend reading chapter 4 of Agile Software Development: Principles, Patterns, and Practices. Words of wisdom from Robert Martin.

Tuesday, April 4, 2017

Individuals and interactions over processes and tools

Working well with others, communicating and interacting, is more important than raw programming talent. A team of average programmers who communicate well are more likely to succeed than a group of superstars who fail to interact as a team.

Robert C. Martin, "Agile Practices", in Agile Software Development:
Principles, Patterns, and Practices
, 4.

[...] the quality of the people on a project, and their organization and management, are much more important factors in success than are the tools they use or the technical approaches they take.

Frederick P. Brooks, Jr., "The Mythical Man-Month after 20 Years", in The Mythical Man-Month (Anniversary Ed.), 276.

Monday, April 3, 2017

List of books (1st edition)


  • Design Patterns: Elements of Reusable Object-Oriented Software

    A reference work in the software design field, the famous GoF book lives up to its status. It provides solid ground for thinking about software projects, from architecture to implementation and refactoring. Even if not all the patterns or their categorization are unanimously accepted (see, for example, Steve Yegge's commentary on the Interpreter pattern or the various discussions about the Singleton pattern on the web), you should know them in order to take part in the debates and decide when to apply them and when not to. The book's organization is exceptional, which helps a lot in a book about patterns. A must-read for every software engineer.
  • The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition

    Is it possible that a book written 40 years ago, a 30-year-old paper, and a review of both written 20 years ago are still relevant today? The Mythical Man-Month, Anniversary Edition proves that it is. The original content is full of good observations about the essential and accidental (in Fred Brooks' own words) aspects of the software task. The anniversary edition adds a copy of the excellent paper No Silver Bullet, published in 1986, along with new commentary by the author. In a field where many things change fast, The Mythical Man-Month passes the test of time with flying colors.
  • Swarm Intelligence

    The focus of this book is the PSO algorithm. Nevertheless, its first part also offers a good introduction to evolutionary algorithms in general, covering historical and even some philosophical questions related to the subject. The second part presents the PSO algorithm itself and comments on the many ways to optimize it. The algorithm is short and simple, so it's not hard to understand (although the authors could have simplified it even more by using a different notation instead of Greek letters). Some reviewers consider the first part rambling, but I have to admit I like it better. Either way, the book as a whole offers very good content. (A minimal PSO sketch follows this list.)
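
Since the review above calls the algorithm short and simple, here is a minimal sketch of the standard inertia-weight PSO in plain Python (my own rendition with common default parameters, not code from the book), minimizing a toy function.

    # Minimal PSO sketch: move each particle toward its own best position
    # (pbest) and the swarm's best (gbest).
    import random

    def pso(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
        X = [[random.uniform(-bound, bound) for _ in range(dim)]
             for _ in range(swarm)]
        V = [[0.0] * dim for _ in range(swarm)]
        pbest = [x[:] for x in X]
        gbest = min(pbest, key=f)[:]
        for _ in range(iters):
            for i in range(swarm):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    V[i][d] = (w * V[i][d]
                               + c1 * r1 * (pbest[i][d] - X[i][d])
                               + c2 * r2 * (gbest[d] - X[i][d]))
                    X[i][d] += V[i][d]
                if f(X[i]) < f(pbest[i]):
                    pbest[i] = X[i][:]
                    if f(pbest[i]) < f(gbest):
                        gbest = pbest[i][:]
        return gbest

    print(pso(lambda x: sum(v * v for v in x)))  # approaches [0.0, 0.0]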

Monday, March 27, 2017

The book is on the table. Well, not always, but it is still important

I have always loved reading. Ever since I was little, I remember being enchanted by the stories in books. Over time, I came to appreciate various kinds of literature, including technical books. When I entered university and gained access to that huge network of libraries spread across its buildings, it was as if a whole world of knowledge had opened up before me, entirely at my disposal. To this day, access to the library is one of the things I miss most from my undergraduate and master's years.

In any case, the taste for reading persists and, back in industry, I keep looking for good books to improve myself. Today I am astonished by the number of titles available, thanks largely to the ease of distribution the Internet has brought, a revolution comparable to the one Gutenberg caused in book production with his movable-type press. Incidentally, the German printer would surely be amazed to see how the press has evolved since his invention, to the point where we can consult thousands of books on a device that fits in the palm of a hand.

Equally astonishing, however, is how little interest colleagues in the profession show in reading technical books. In general, I do not see a lack of interest in reading and staying informed, but I notice that books are not among the first choices for acquiring knowledge. Blogs, website articles, scientific papers, and tutorials come first. Why does this happen?

I fully understand that, computing being an extremely dynamic field, content written about more specific subjects can quickly become obsolete, and this discourages certain investments; after all, nobody wants to feel they are wasting time reading a 700-page book that may be outdated six months from now. I also understand that this same dynamism can put a certain pressure on our choices of what to read: there is so much being released and changing all the time that, for those who want to stay on the crest of the wave, there is enough reading material in blogs and articles to fill practically all the time available.

Even so, I wonder whether these two reasons are enough to reduce the usefulness of technical books and relegate them to second, third, or fourth place, as I see happen so often. My impression is that the problem lies more in the choice of which books to read than in the reading of books itself.

Regarding the first problem I mentioned: if, on the one hand, there are books that tend to be focused on the moment, losing much of their value within months or years, there are also those we can consider timeless. They touch on fundamental questions, pertinent to the essence of computing tasks. Because of that, they have the power to remain useful even after years or decades, even if some of their parts become outdated. This power offsets the effort of reading them and repays the investment of time, however large it may be. Not by accident, I consider that this same kind of book can stand on equal (if not higher) footing with the content on the crest of the wave. That is because all recent content, every novelty, grew out of earlier constructions. Thus, understanding those constructions and, above all, the foundation common to them helps immensely in understanding what the novelties propose, what benefits they bring compared to what already exists, and what difficulties they entail (after all, there is no free lunch).

There is, therefore, a kind of book capable of standing up to the impediments mentioned above: the timeless, foundational book (some would use the term "classic", but I don't think that word conveys exactly what I mean). And this kind of book is still important, still worth reading, whether in print or in digital media. In making this claim, I do not intend to diminish the relevance of content published in blogs, articles, and tutorials (this text itself belongs to that set). I merely urge colleagues in the profession to consider how much they stand to gain by renewing their ties with this old friend: the book.

Knowing, however, that exhortations rarely work as effectively as example, I intend to adopt the same stance as Fred Brooks [1] and concentrate my efforts on the "how", that is, commenting on good books that, in my opinion, fit the category I have described. That is what will come in future posts.

[1] Frederick P. Brooks, Jr., "The Other Face", in The Mythical Man-Month (Anniversary Ed.), Boston: Addison-Wesley, 1995, Chap. 15, 164.