On onion (cone?) architecture

High level modules should not depend on low level modules. Both should depend on abstractions.

The first three circles starting with the central Domain define the application core. Each of these is a collection of high level modules.

The outer-most blue circle is about the details. This is where everything that’s less important goes. This includes all the tools, frameworks and models that support the application core. […]

The green circle is the glue that connects the application core to the details that support it. Adapters are user code implementations that are specific to various details in the blue circle and that hide these away from the inner core circles.

[You define adapters in the green circle, reusing technology details in your blue circle and calling presentation services from the inner yellow circle. But what if your app also needs to use a delivery framework detail?]

Every piece of code in the core that needs some external service defines an abstraction for that service. […] This abstraction is called a port.

Later, an adapter for the port is defined in the green circle […]. [It] implements the core abstraction and forwards the calls to the specific technology in the blue circle.
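The port-and-adapter mechanics described above can be sketched in code. This is a minimal illustration, not anything from the quoted article: all names here (`EmailPort`, `WelcomeService`, `ConsoleEmailAdapter`) are hypothetical, and the "blue circle" technology is stubbed with an in-memory list standing in for, say, a real SMTP client.

```typescript
// Core (inner circles): a service defines the port (abstraction) it needs,
// knowing nothing about the technology that will fulfil it.
interface EmailPort {
  send(to: string, subject: string, body: string): void;
}

// The core service depends only on the abstraction.
class WelcomeService {
  constructor(private email: EmailPort) {}
  greet(user: string): string {
    this.email.send(user, "Welcome", `Hello, ${user}!`);
    return `greeted ${user}`;
  }
}

// Green circle: the adapter implements the core's port and forwards calls
// to a concrete detail in the blue circle (stubbed here with an array).
class ConsoleEmailAdapter implements EmailPort {
  sent: string[] = [];
  send(to: string, subject: string, body: string): void {
    this.sent.push(`${to}|${subject}|${body}`); // stand-in for a real mail client
  }
}

const adapter = new ConsoleEmailAdapter();
const service = new WelcomeService(adapter); // the dependency points inward
console.log(service.greet("ada")); // → greeted ada
```

Note the direction of the arrows: the adapter depends on the core's interface, never the other way around, so swapping the mail technology later touches only the green circle.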

via Cone Architecture — Adrian Bontea

I’d still call this “onion architecture”, as I think it must remain very clear that the core can be hosted inside any executable package; but the points of the linked article are all valid. And indeed, from my point of view too, core reusability is a must for every successful software project – from its very first version, or even its PoC.

Of course, following the dependency inversion principle in real projects is really difficult. First, because it’s often not intuitive (especially if you are very tech-focused, know very well what certain contemporary technologies can and cannot do, and are eager to use one or another to implement what you need). Second, because you can often end up with thick (or very thick) adapters, forcing you to trade off their implementation cost against the benefits of better core reusability.

The first issue is usually resolved by just sitting down with a pen and a piece of paper and thinking deeply while you draw your diagrams.

For the second, things are more difficult, because many times this seems to increase costs. But do note that reusability is implicitly required if you want a long-term, successful project, because technologies – i.e. details – will change over time outside of your team’s control. And you don’t want your project to become legacy too early, as the cost of continuing to maintain it in the “new world” would become too high. (Think about the recent rise of the cloud and mobile devices, and maybe that of 3D holography in the future, and about the applications that you needed, or will need, to rewrite from scratch to ensure they live on.)

Moreover, if you do multi-platform (rather than cross-platform) development, reusing the core concepts is a must from the beginning – even though they might need rewriting in multiple different programming languages. (Think about having to write the same logic in C#, JavaScript, Swift, and Kotlin; having the core domain, application, and some generic presentation services extracted – maybe with pseudocode and some agnostic modeling tools – makes all these conversions a lot easier.)

Finally, it’s OK if you also end up with multiple other partial onions within some services, such as presentation or adapters. (For example, if your main domain includes a hierarchy of objects, and your presentation services expose visibility values for items based on expanding/collapsing parent items, that part may be extracted into yet another separate domain, unrelated to the main one: a domain of tree nodes with expansion state, and items with computed visibility under them. The main presentation services can then reuse it, in a generic fashion, to prepare the main domain hierarchy for user output; later it can be further adapted to 2D screens, large or small, read by Alexa devices, or exposed in a 3D diagram.)
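That tree-node mini-domain could look like the sketch below. The post only describes this idea in prose, so everything here (the `TreeNode` name, the visibility rule that an item is shown when all its ancestors are expanded) is a hypothetical reading of it:

```typescript
// A tiny, generic tree domain: nodes with expansion state, and items
// with computed visibility under them. Unrelated to any business domain.
class TreeNode {
  children: TreeNode[] = [];
  expanded = true;
  constructor(public label: string) {}

  add(child: TreeNode): TreeNode {
    this.children.push(child);
    return child;
  }

  // A node is visible when every ancestor above it is expanded;
  // collapsing a node hides its descendants but not the node itself.
  visibleLabels(parentVisible = true): string[] {
    const out: string[] = parentVisible ? [this.label] : [];
    const childrenVisible = parentVisible && this.expanded;
    for (const c of this.children) out.push(...c.visibleLabels(childrenVisible));
    return out;
  }
}

const root = new TreeNode("root");
const a = root.add(new TreeNode("a"));
a.add(new TreeNode("a1"));
root.add(new TreeNode("b"));
a.expanded = false; // collapsing "a" hides "a1" but keeps "a" visible
console.log(root.visibleLabels()); // → [ 'root', 'a', 'b' ]
```

Because this domain knows nothing about the main hierarchy, a presentation service can map any domain object graph onto these nodes and then hand the computed visibility to a 2D screen adapter, a voice adapter, or anything else.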

Or maybe you’d want to also remove the dependencies between the core services themselves – i.e. keep the application and generic presentation services totally disconnected, with an app controller that manages the required interactions and orchestrates everything (acting as more than an adapter). Overall, you’d still benefit from the same dependency inversion idea that brings the onion concept to life (or the cone, if you wish, for better mnemonics, as Adi advocates).
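A hypothetical sketch of that controller idea, with invented names throughout: the application and presentation services never reference each other, and only the controller knows both sides.

```typescript
// Application service: use-case logic, unaware of presentation.
class OrderAppService {
  placeOrder(item: string): { id: number; item: string } {
    return { id: 1, item }; // stand-in for real use-case logic
  }
}

// Presentation service: output shaping, unaware of the application layer.
class OrderPresentationService {
  toDisplayText(order: { id: number; item: string }): string {
    return `Order #${order.id}: ${order.item}`;
  }
}

// The controller orchestrates: the only place that knows both sides.
class OrderController {
  constructor(
    private app: OrderAppService,
    private pres: OrderPresentationService
  ) {}
  handlePlaceOrder(item: string): string {
    const order = this.app.placeOrder(item);
    return this.pres.toDisplayText(order);
  }
}

const controller = new OrderController(
  new OrderAppService(),
  new OrderPresentationService()
);
console.log(controller.handlePlaceOrder("book")); // → Order #1: book
```

The trade-off is the usual one: more wiring code in the controller, in exchange for core services that can each evolve, be tested, or be reused without dragging the other along.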

About Sorin Dolha

My passion is software development, but I also like physics.
This entry was posted in Architecture.

3 Responses to On onion (cone?) architecture

  1. codeinfig says:

How is this not architecture over efficiency? It seems like someone trying to lower the efficiency of the CPU for the sake of organisation or maintainability – and while the costs are almost definite (you’re increasing the distance between the module and the CPU; you might as well increase the distance between transistors), the promises in return are not.

Of course I could be wrong, so please – let me know how I misunderstood. Software gets slower faster than hardware gets faster; no matter how many cores, how fast a clock, how much RAM, software authors will fill two of them.

Don’t get me wrong, I write some pretty leisurely software. But I don’t go telling people that’s the best way to do it, just the easy way; and they are always free to make it go faster.

    • Sorin Dolha says:

      Hey there, thanks for the comment!

I’ll start by saying I do like assembly language too! And actually I like physics as well! 🙂 But without organising things up from those levels – from electronics to processors, then to reusable software modules and other logical groupings, OOP, and functional programming – I think we couldn’t have obtained any of the complex (hardware and) software tools that we use every day.

Of course, over-architecting is a possibility you always want to avoid. But in my opinion, for fairly complex projects (i.e. most contemporary projects), you do need things well modularised, using either a layered or some kind of onion architecture. And you will make many other trade-offs while implementing such systems anyway, including code optimizations to overcome hardware limitations. A proper software architecture allows higher complexity without losing functional quality (especially during long-term maintenance periods), while of course increasing the initial development time and maybe decreasing the runtime performance of the final product by an amount that you will naturally want to minimise.

But since Adi’s post and mine compare onion architecture with a layered system, I think we should focus here mostly on the performance penalties between the two. And from my perspective, they are negligible, if they exist at all. Simply because inverting dependencies doesn’t change the way things run when the executable process uses those transistors; the changes are only in how the original source code is organised and how contracts (and ports) are defined to support it. At most, slightly more memory may be used when running the onion system vs. the layered app, and even that difference largely disappears after linking the modules and generating the deployment.

What I strongly agree with, however, is that developers (including a younger me, I admit) assume that hardware gets faster more quickly than it actually does. But that doesn’t depend on the architecture we choose for the systems we create; I think it’s a general thing instead: as humans, we are usually just too optimistic. Either the hard way (by experience) or through proper education, eventually we all learn the truth: we do need to consider all hardware limitations when we design and develop software, or else a project could fail miserably!

      • codeinfig says:

I guess the most important thing is that you’re taking all necessary factors into account, not just a single point of streamlining or organisation. Thanks for explaining.
