Domain-Driven Design (DDD) offers the most practical approach to defining what actually has business value. Coupled with the Onion Architecture, it gives us clean communication between layers and lets us abstract away the technology details that are secondary to the business. The best way to explain Domain Storytelling is to see it in action, and likewise any architectural pattern (such as Onion) is best introduced through a clean and clear example. So I’d like to present a lightweight blueprint that agile teams can use as guidance when designing large-scale development programs. Furthermore, for over three years I have designed, built and maintained two projects using DDD and a multi-layered architecture, so you can get your hands on a ready-and-working enterprise design schema and code solutions.
Let’s start with business needs. Nowadays, any sufficiently complex solution is built from services. When you have separate services that communicate with each other over the network, you have a distributed system. In such systems, choosing the wrong integration strategy can produce slow or unreliable software, which in turn has a negative business impact. That’s why it’s very important for architects and developers to understand the best approaches to building scalable distributed systems.
Distributing a system introduces the need to break it up into smaller deployable components. This is the moment when you realize that DDD is the most fitting design approach, and a perfect choice for systems that are complex, or are going to become complex.
Well, it’s time for the first Schema.
Some of you may recognize the standard Onion Architecture diagram introduced by Jeffrey Palermo in 2008. The idea of the Onion Architecture is to place the Domain and Application layers at the center of your application, while the Presentation and Infrastructure layers sit in the outer circles. The main point of this architecture is the direction of data flow: from the UI, through our Application Service, into our Domain Model, and back to the outside. You can compare it to cutting across an onion, but with dry eyes in this case. It is important to notice that all dependencies point inwards: the layers inside the application core define the interfaces that the outer layers can use, so the outer layers depend on the inner layers’ interfaces.
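To make the dependency rule concrete, here is a minimal sketch in Python (the article’s project is not in Python, and the names here are my own illustration): the domain layer owns a repository interface, the infrastructure layer implements it, and the application core is programmed against the interface only, so all dependencies point inwards.

```python
from abc import ABC, abstractmethod
from typing import Optional

# Inner circle (domain): owns the contract, knows nothing about storage.
class UserRepositoryInterface(ABC):
    @abstractmethod
    def find_name(self, user_id: int) -> Optional[str]: ...

# Outer circle (infrastructure): depends inwards by implementing
# the domain-owned interface.
class InMemoryUserRepository(UserRepositoryInterface):
    def __init__(self) -> None:
        self._names = {}

    def save(self, user_id: int, name: str) -> None:
        self._names[user_id] = name

    def find_name(self, user_id: int) -> Optional[str]:
        return self._names.get(user_id)

# Application core: uses only the inner interface, so the storage
# technology can be swapped without touching this code.
def greet(repo: UserRepositoryInterface, user_id: int) -> str:
    name = repo.find_name(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"
```

Swapping `InMemoryUserRepository` for a database-backed implementation changes nothing in `greet`, which is exactly the point of the inward dependency rule.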
Onion is an architectural pattern for a system, whereas DDD is a way to design a subset of the objects in the system. In my practice, the Onion Architecture has therefore proven the best fit for coupling with DDD and for integrating DDD principles into real-life projects. Let’s see how we can move from the flat 2-D circle to a 3-D representation. ~~Magically~~, the circle turns into a cylinder.
Mapping the implementation model back to the analysis model and ensuring they are bound to one another is hard. To guide developers and clarify designs, Evans has built upon the domain model pattern that was first cataloged in Martin Fowler’s book Patterns of Enterprise Application Architecture. Those patterns used to create domain models and tie implementation to analysis have continually evolved since Evans’s original text.
And today I shall introduce you to my implementation of a highly scalable, autonomous, distributed API that effectively adapts DDD patterns to the ‘Onion’ architecture. I know it doesn’t look simple at first glance, but we’ll get through each layer together, step by step, and in the end it will be clean and clear.
DDD is definitely NOT about technology. DDD is all about the domain. As Eric Evans defines a domain layer in a project:
This layer is the heart of business software.
Let me describe the domain layer of the enterprise project blueprint in detail:
DTO is an acronym which stands for Data Transfer Object. Although the main reason for using a Data Transfer Object is to batch up what would otherwise be multiple remote calls into a single call, another advantage worth mentioning is that it encapsulates all of the simple request data.
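As a quick illustration (a Python sketch with hypothetical field names, since DTOs are plain data carriers in any language), a DTO has no behavior; it just batches the request data into one object that crosses a boundary:

```python
from dataclasses import dataclass

# A DTO is a plain, behavior-free carrier: it batches all the simple
# request data into a single object that crosses a layer boundary.
@dataclass(frozen=True)
class CreateUserDTO:
    name: str
    email: str
    country: str
```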
Understandably, this is where we keep the exceptions we use in the project.
The Repository is a critical part of the entity lifecycle that allows us to persist domain entities. In the domain layer we create a contract for application or domain services; using that contract, each service can access domain models directly to find the information it needs. You could create a generic repository interface defining all kinds of common methods. At this point you are probably asking yourself: where exactly in the schema can I see the Repository Interface? Nowhere. But let’s look at Domain Services, and we will get to the truth.
Domain services encapsulate domain logic and concepts that are not naturally modeled as value objects or entities in your model. Domain services have *no identity or state*; their responsibility is to orchestrate business logic using entities and value objects. If a domain service implementation requires some specific technology (for example database access, SMTP server access, or whatever else), that implementation must live in the infrastructure layer. The domain simply doesn’t care about implementations: if the business experts talk about something and we decide to make this “something” a contract, it must live in the Domain layer. Therefore the Repository Interface is a Domain Service, and the Repository itself is an Infrastructure Service.
An entity represents a concept in your domain that is defined by its identity rather than its attributes. Although an entity’s identity remains fixed throughout its lifecycle, its attributes may change. An example of an entity is a User: its unique identity won’t change once it is set, but its name, address, etc. can be altered many times. Entities are mutable because their attributes can change.
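A minimal sketch of the idea (in Python, with an illustrative `User` that is not taken from the original codebase): equality is based on identity alone, while attributes stay mutable.

```python
class User:
    """Entity: defined by its identity, not by its attributes."""

    def __init__(self, user_id: int, name: str) -> None:
        self.id = user_id   # fixed for the whole lifecycle
        self.name = name    # may be altered many times

    def rename(self, new_name: str) -> None:
        self.name = new_name

    def __eq__(self, other: object) -> bool:
        # Two User objects are the same entity iff their identities match,
        # regardless of what their attributes currently say.
        return isinstance(other, User) and self.id == other.id
```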
Value objects (VO)
VOs represent the elements of your domain that are known and defined only by their attributes. Value objects don’t need identity because they are always associated with another object and are therefore understood within a particular context. For instance, you may have a User entity that uses value objects to represent the user’s address, account information and so on. None of these characteristics needs an identity of its own, because each has meaning only in the context of being attached to a user. A User Address that is not attached to a User has no meaning. Because they are defined by their attributes, value objects are treated as immutable: once constructed, they can never alter their state.
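In code (again a Python sketch with an illustrative `Address`), immutability and attribute-based equality fall out naturally; “changing” a value object means producing a new one:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: state can never change after construction
class Address:
    street: str
    city: str
    zip_code: str

    def with_city(self, city: str) -> "Address":
        # VOs never mutate; "changing" one yields a new instance.
        return Address(self.street, city, self.zip_code)
```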
In the project I’ve provided the following data-flow bus, which illustrates the relationships between domain elements: create a DTO from the request data, create a VO from the DTO, and persist an Entity initialized from the VO. And backwards: fetch the Entity, create a VO, and return a DTO in the Response.
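The round trip above can be sketched as two small mapping functions (a Python illustration with hypothetical types, not the project’s actual bus):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserDTO:            # boundary-level request/response data
    name: str

@dataclass(frozen=True)
class UserName:           # value object built from the DTO
    value: str

class UserEntity:         # persisted element with identity
    def __init__(self, user_id: int, name: UserName) -> None:
        self.id = user_id
        self.name = name

def inbound(dto: UserDTO, next_id: int) -> UserEntity:
    # request data -> DTO -> VO -> Entity
    return UserEntity(next_id, UserName(dto.name))

def outbound(entity: UserEntity) -> UserDTO:
    # Entity -> VO -> DTO -> response
    return UserDTO(entity.name.value)
```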
Domain events signify something that has happened in the problem domain that the business cares about. You can use events to record changes to a model in an audit fashion, or as a form of communication across aggregates. Often an operation on a single aggregate root results in side effects outside that aggregate root’s boundary; other aggregates within the model can listen for events and act accordingly. We are using Doctrine lifecycle events, and they are a perfect example of Domain Events.
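Stripped of any framework, the mechanism is just “record a fact, let listeners react”. A minimal sketch (Python, with an invented `UserRegistered` event; the project itself relies on Doctrine’s lifecycle events rather than this hand-rolled bus):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class UserRegistered:
    """A fact the business cares about, expressed as an immutable event."""
    user_id: int

class EventBus:
    """Delivers published events to every subscribed listener."""

    def __init__(self) -> None:
        self._listeners: List[Callable] = []

    def subscribe(self, listener: Callable) -> None:
        self._listeners.append(listener)

    def publish(self, event: object) -> None:
        for listener in self._listeners:
            listener(event)
```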
This layer is responsible for the navigation between the UI and the other layers in the bounded context, as well as for the interaction with the application layers of other bounded contexts. Here we can perform basic (non-business-related) validation on the user input before transmitting it to the other (lower) layers of the application. It should not contain any business or domain-related logic, but it definitely should carry out the tasks requested by the UI via services.
Typically, the data received by your application will be some flavor of DTO. Having successfully parsed the command into types the domain model understands, the command is executed in the domain, which may still reject the command on the grounds that it would violate the business invariant (the account doesn’t exist yet, the account is blocked, etc.). In other words, the business validation is going to happen in the domain model, after the application layer validates the inputs. The implementation of the validation rules will normally live either in the constructor of the value type, or within the factory method used to construct the value type. Basically, you restrict the construction of the objects so that they are guaranteed to be valid, isolating the logic in one place, and invoking it at the process boundaries.
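Putting the validation rule where the paragraph says it belongs, in the constructor of the value type, looks like this in a Python sketch (the `EmailAddress` type is my own example): construction is the only way to obtain an instance, so every instance is guaranteed valid.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmailAddress:
    value: str

    def __post_init__(self) -> None:
        # Construction is the only way in, so the validation logic is
        # isolated here and invoked at the process boundary.
        if "@" not in self.value:
            raise ValueError(f"invalid email: {self.value!r}")
```

Business validation (account exists, account not blocked, and so on) still happens later, in the domain model; this guard only ensures the command reaches the domain in types it understands.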
Providers are necessary for integrating with our framework’s mechanism for orchestrating services in containers.
This structure is semantically similar to the application service layer, because here we store handlers that operate on commands (or task requests, in our case) and call services according to the task. A command itself is a business intention, something you want the system to do. Keep the definitions of the commands in the domain. Technically a command is just a pure DTO, and a DTO is exactly what we use for our commands. The name of a command should always be imperative, like “CreateUser” or “DeleteScenario”. One command is handled by exactly one command handler.
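The one-command-one-handler rule can be sketched as a tiny command bus (a Python illustration with invented names, not the project’s actual bus): the command is a pure DTO with an imperative name, and dispatch routes each command type to its single handler.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateUser:
    """Command: a business intention, technically a pure DTO."""
    name: str

class CreateUserHandler:
    """Exactly one handler per command type."""

    def __init__(self) -> None:
        self.users = []

    def __call__(self, command: CreateUser) -> None:
        self.users.append(command.name)

class CommandBus:
    def __init__(self) -> None:
        self._handlers = {}

    def register(self, command_type: type, handler) -> None:
        self._handlers[command_type] = handler

    def dispatch(self, command) -> None:
        # Routing is by command type: one command, one handler.
        self._handlers[type(command)](command)
```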
There is a benefit to having event handlers that live in the application service layer in addition to those that live in the domain. These handlers tend to carry out infrastructural tasks, like sending e-mails. Note that they are part of neither the Ubiquitous Language (UL) nor the domain.
One important responsibility of application-service-layer handlers is triggering communication with external bounded contexts. Our Notifiers (SMS, email) are an example of such Event Handlers.
We use them as orchestrators that receive a request from the client, look up the object(s) that know how to handle the request, ask them to handle it, store the result, and send a response back to the client. Application services should not need to make any decisions themselves.
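The “no decisions, pure orchestration” shape can be sketched like this (Python, with plain lists standing in for the repository and notifier ports; all names are illustrative): the service only sequences the steps and wires the collaborators together.

```python
class RegistrationService:
    """Application service: pure orchestration, no business decisions.

    `repo` and `notifier` are stand-ins for the real ports; here they are
    simply lists so the sketch stays self-contained.
    """

    def __init__(self, repo: list, notifier: list) -> None:
        self._repo = repo
        self._notifier = notifier

    def register(self, name: str) -> dict:
        user = {"name": name}        # domain object creation (simplified)
        self._repo.append(user)      # ask the repository to store it
        self._notifier.append(name)  # trigger the notification side effect
        return {"status": "ok", "name": name}
```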
A Use Case, generally speaking, is a business scenario; that is how DDD defines it in theory, and in the diagram and code structure it is a direct analog of an Application Service.
In practice, though, with real code in real projects, we have to build a bridge between the UI data and an application service. We additionally have to:
- Firstly, integrate the middleware layer (application-level validation, authorization check, data-access check (permissions to perform the Use Case), etc.) and initialize the Use Case with data. In general, we need to assemble the UI data into a Use Case, so I call this part the UseCaseAssembler.
- Secondly, call the service appropriate to the use case’s purpose, transform the data returned by the service into a Use Case Response, and pass it to the UI layer. This is the Use Case itself.
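The two steps above can be sketched as follows (a Python illustration; the checks and names are simplified stand-ins for the real middleware): the assembler guards and builds the use case, and the use case calls the service and shapes the response.

```python
class CreateUserUseCase:
    """Step two: call the service, wrap its result into a response."""

    def __init__(self, name: str) -> None:
        self.name = name

    def execute(self, service) -> dict:
        return {"response": service(self.name)}

def use_case_assembler(raw_input: dict, current_user: str) -> CreateUserUseCase:
    # Step one: middleware-style checks before the use case even exists.
    if current_user != "admin":
        raise PermissionError("no permission to perform this Use Case")
    if "name" not in raw_input:
        raise ValueError("missing required field: name")
    return CreateUserUseCase(raw_input["name"])
```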
This layer can be described as a set of shared libraries for the Domain, Application, and UI layers. As you can probably see from the schematic, it also communicates with external systems. What shall we have here?
If the creation of an entity or a value object is sufficiently complex, you should delegate the construction to a factory. A factory ensures that all invariants are met before the domain object is created. You can also use factories when re-creating domain objects from persistent storage. In our case it is a builder, because the DTO <-> VO <-> Entity transformation happens here, and we are building those domain elements.
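A minimal factory sketch (Python, with a hypothetical `Account`): every invariant is checked before the object exists, so no invalid instance can ever be observed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

class AccountFactory:
    """Ensures all invariants are met before the domain object is created."""

    @staticmethod
    def create(owner: str, opening_balance: int) -> Account:
        if not owner:
            raise ValueError("an account needs an owner")
        if opening_balance < 0:
            raise ValueError("opening balance cannot be negative")
        return Account(owner, opening_balance)
```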
A domain model needs a methodology for persisting and hydrating an aggregate. Because an aggregate is treated as an atomic unit, you should not be able to persist changes to an aggregate without persisting the entire aggregate. A repository is a pattern that abstracts the underlying persistence store from the model allowing you to create a model without thinking about infrastructure concerns.
Services in the infrastructure layer are services that implement an interface from the domain layer. In the domain layer you define an interface with the actions you want to have.
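For example (a Python sketch with an invented notifier; the article’s project would use an SMTP- or SMS-backed implementation here): the contract lives in the domain, and each infrastructure implementation satisfies it with a different technology.

```python
from abc import ABC, abstractmethod

# Domain layer: only the contract lives here.
class NotifierInterface(ABC):
    @abstractmethod
    def notify(self, recipient: str, message: str) -> None: ...

# Infrastructure layer: one technology-specific implementation...
class ConsoleNotifier(NotifierInterface):
    def notify(self, recipient: str, message: str) -> None:
        print(f"to {recipient}: {message}")

# ...and another, handy for tests: same domain contract,
# different technology behind it.
class RecordingNotifier(NotifierInterface):
    def __init__(self) -> None:
        self.sent = []

    def notify(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))
```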
This is an example of a real-world infrastructure need: here we keep the custom ORM types that we use in our Entities.
This is my own extended version of a standard CQRS command request. Our service has two types of API requests: the external “usecase” type and the command-request type, which can be either external or internal. The second type can be managed differently depending on its goal. Sometimes it is a simple external request to handle a command; other times it is a complex task with a transaction inside. So we use a different approach for each version of the task, according to a pre-set configuration. Either way, we create a task request and manage it using a message broker. You can look this up in my dedicated article if you are interested in the topic.
Presentation Layer (UI)
The UI Layer is responsible for presenting information to the user and interpreting user commands. So it should translate incoming requests into method calls on the application layer and, in the case of web services, translate the return values into responses. It may also contain Controller classes, as in classical MVC.
We are using GraphQL, so I am going to explain this layer with an example of how we deal with requests and responses. We have a single endpoint with specific types for the input parameters and the output data, so we can create a request from the input query and return a response after resolving this request. All of it takes place in a Resolver specific to each Use Case.
Our Schema is a data contract. We’re telling anyone who uses this API: “hey, this is the format you can always expect to see from this API call”. And you are going to need a Presenter to perform the complex logic of composing data from entities, in case you need that. More about UI on GraphQL.
So, we’ve almost made it. If someone had given me a full example of an enterprise project three years ago, I would have been more than happy. It wasn’t going to happen back then, but now I have one for you.
I’ve described all the context directories except one, and this hidden gem is the Shared Kernel. It is a way of cross-context communication in which two or more contexts share a common model. You can learn about all the other ways (such as Customer-Supplier, Conformist, Partnership and others) in Vaughn Vernon’s IDDD book. In my project I prefer to use the Shared Kernel concept linked with an Anti-Corruption Layer on the one hand and a Partnership relationship on the other. The first, combined approach guarantees the integrity of the models via read-only access while keeping them usable; and the Partnership is convenient to operate via the messaging patterns that I described earlier.
When integrating your bounded contexts, it’s important to get an idea from the business what its nonfunctional requirements are so that you can choose an integration strategy that lets you meet them with the least amount of effort. Some options, such as messaging, take more effort to implement, but they provide a solid foundation for achieving high scalability and reliability. On the other hand, if you don’t have such strong scalability requirements, you can integrate bounded contexts with a small initial effort using database integration. You can then get on with shipping other important features sooner and integrate CQRS later on. Unfortunately, you can’t just ask the business how scalable a system needs to be or how reliable it should be. In my practice, trying to give them both scenarios and explaining the costs of each scenario is the best approach.
Thank you to everyone who has read this “long read”! I hope you found it useful. Let’s always try to keep our code clean and clear, and ourselves safe and sound as well.