Generic repository pattern using Dapper

Implementing the Repository pattern with Dapper can be done in many ways. In this blog I will explain what the Repository pattern is and how to implement it using reflection in C#.

When searching around the web I found various implementations, none of which was satisfactory, mostly because they required manually entering table and field names, or were overly complex implementations involving Unit of Work, etc.

A repository similar to the one presented here is in use in a production CQRS/ES system and works quite well.

Even though the use case depicted here is quite “unique”, I think this repository implementation can be applied to most relational database systems with minimal refactoring.

What is the Repository pattern?

When talking about the Repository pattern it is important to distinguish between the DDD implementation of a repository and the generic repository pattern.

A generic repository is a simple contract defined as an interface on a per-object basis. That means that one repository object is related to one table in the database.

The DDD repository pattern works with an aggregate root object and persists it to one or more tables, or, if event sourcing is used, as a series of events in an event store. So in this instance the repository is related not to one table but to one aggregate root, which can map to one or more tables. This is a complex process due to the impedance mismatch effect, which is better handled with ORMs, but this is not our use case.

Generic repository UML diagram:

  • The GenericRepository abstract class implements the IGenericRepository interface. All shared functionality is implemented in this class.
  • The ISpecificRepository interface should have methods required for a specific use case (if any).
  • The SpecificRepository class inherits from the GenericRepository abstract class and implements any specific methods from ISpecificRepository.

Unit Of Work and transaction handling

The Unit of Work pattern implements a single transaction for multiple repository objects, making sure that all INSERT/UPDATE/DELETE statements are executed in order and atomically.

I will not be using Unit of Work, but rather save through each repository directly using its Update/Insert methods. The reason for this is that these repositories are designed toward a specific use case, detailed below.

All transaction handling is done manually by wrapping multiple repository commands in a .NET transaction (TransactionScope) object. This gives a lot more flexibility without adding additional complexity.
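As a minimal sketch of that idea (the repository and variable names here are placeholders; the full event-store example appears later in this post), wrapping two repository calls in one transaction could look like this:

using System.Transactions;

// Inside an async method: both writes commit or roll back together
using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await aggregateRootRepository.InsertAsync(aggRoot);
    await eventsRepository.InsertAsync(eventDao);

    // Mark the scope complete; disposing the scope without this rolls both writes back
    transaction.Complete();
}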

Repository implementation

Let us define the interface first:

public interface IGenericRepository<T>
    {
        Task<IEnumerable<T>> GetAllAsync();
        Task DeleteRowAsync(Guid id);
        Task<T> GetAsync(Guid id);
        Task<int> SaveRangeAsync(IEnumerable<T> list);
        Task UpdateAsync(T t);
        Task InsertAsync(T t);
    }

The bootstrap code for the repository class has the responsibility of creating the SqlConnection object and opening the connection to the database. After that, Dapper will use this connection to execute SQL queries against the database.

public abstract class GenericRepository<T> : IGenericRepository<T> where T: class
    {
        private readonly string _tableName;

        protected GenericRepository(string tableName)
        {
            _tableName = tableName;
        }
        /// <summary>
        /// Generate new connection based on connection string
        /// </summary>
        /// <returns></returns>
        private SqlConnection SqlConnection()
        {
            return new SqlConnection(ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString);
        }

        /// <summary>
        /// Open new connection and return it for use
        /// </summary>
        /// <returns></returns>
        private IDbConnection CreateConnection()
        {
            var conn = SqlConnection();
            conn.Open();
            return conn;
        }

        private IEnumerable<PropertyInfo> GetProperties => typeof(T).GetProperties();

Make sure you have a connection string named MainDb. I am using MSSQL LocalDB, a lightweight MSSQL database shipped with Visual Studio.

<add name="MainDb"
connectionString="Data Source=(localdb)\mssqllocaldb;Integrated Security=true;Initial Catalog=dapper-examples;"
providerName="System.Data.SqlClient"/>

Implementing most of these methods, except for Insert and Update, is quite straightforward using Dapper.

public async Task<IEnumerable<T>> GetAllAsync()
        {
            using (var connection = CreateConnection())
            {
                return await connection.QueryAsync<T>($"SELECT * FROM {_tableName}");
            }
        }

        public async Task DeleteRowAsync(Guid id)
        {
            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync($"DELETE FROM {_tableName} WHERE Id=@Id", new { Id = id });
            }
        }

        public async Task<T> GetAsync(Guid id)
        {
            using (var connection = CreateConnection())
            {
                var result = await connection.QuerySingleOrDefaultAsync<T>($"SELECT * FROM {_tableName} WHERE Id=@Id", new { Id = id });
                if (result == null)
                    throw new KeyNotFoundException($"{_tableName} with id [{id}] could not be found.");

                return result;
            }
        }

        public async Task<int> SaveRangeAsync(IEnumerable<T> list)
        {
            var inserted = 0;
            var query = GenerateInsertQuery();
            using (var connection = CreateConnection())
            {
                inserted += await connection.ExecuteAsync(query, list);
            }

            return inserted;
        }

 

SaveRangeAsync receives a list of items, saves them to the database, and returns the number of items saved. The operation can be made atomic by wrapping the call in a TransactionScope.

Implementing Insert and Update queries

Implementing insert and update requires a bit more work. The general idea is to use reflection to extract field names from the model class and then generate the insert/update query based on those names. The field names will be used as parameter names for Dapper, so it is important to make sure that the DAO class field names are the same as the column names in the actual table.

In both cases the idea is the same: take the object provided as an input parameter and generate a SQL query string with parameters. The only difference is which query is generated, INSERT or UPDATE.

Both methods use reflection to extract field names from the model object. The method below can be made static, since it does not use any instance variables, which also helps performance.

private static List<string> GenerateListOfProperties(IEnumerable<PropertyInfo> listOfProperties)
        {
            return (from prop in listOfProperties
                    let attributes = prop.GetCustomAttributes(typeof(DescriptionAttribute), false)
                    where attributes.Length <= 0 || (attributes[0] as DescriptionAttribute)?.Description != "ignore"
                    select prop.Name).ToList();
        }

 

What this does is extract the property names into a List<string> using reflection. It skips any property marked with a Description attribute whose value is “ignore”.
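For illustration, a hypothetical User DAO might mark a computed property so it never ends up in the generated SQL (the FullName property here is my assumption, not part of the original model):

using System;
using System.ComponentModel;

public class User
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Computed property, not a column in the Users table;
    // marked so GenerateListOfProperties leaves it out of INSERT/UPDATE queries.
    [Description("ignore")]
    public string FullName => $"{FirstName} {LastName}";
}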

Once we have this list, we can iterate over it and generate the actual query:

public async Task InsertAsync(T t)
        {
            var insertQuery = GenerateInsertQuery();

            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync(insertQuery, t);
            }
        }

private string GenerateInsertQuery()
        {
            var insertQuery = new StringBuilder($"INSERT INTO {_tableName} ");
            
            insertQuery.Append("(");

            var properties = GenerateListOfProperties(GetProperties);
            properties.ForEach(prop => { insertQuery.Append($"[{prop}],"); });

            insertQuery
                .Remove(insertQuery.Length - 1, 1)
                .Append(") VALUES (");

            properties.ForEach(prop => { insertQuery.Append($"@{prop},"); });

            insertQuery
                .Remove(insertQuery.Length - 1, 1)
                .Append(")");

            return insertQuery.ToString();
        }

 

Update method has some small differences:

public async Task UpdateAsync(T t)
        {
            var updateQuery = GenerateUpdateQuery();

            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync(updateQuery, t);
            }
        }

private string GenerateUpdateQuery()
        {
            var updateQuery = new StringBuilder($"UPDATE {_tableName} SET ");
            var properties = GenerateListOfProperties(GetProperties);

            properties.ForEach(property =>
            {
                if (!property.Equals("Id"))
                {
                    updateQuery.Append($"{property}=@{property},");
                }
            });

            updateQuery.Remove(updateQuery.Length - 1, 1); //remove last comma
            updateQuery.Append(" WHERE Id=@Id");

            return updateQuery.ToString();
        }

 

An additional thing we need to take care of here is what happens if the record to update is not found. There are a couple of solutions for this; some involve throwing an exception, others returning an empty object or otherwise notifying the calling code that the update was not performed.

In this case we can rely on Dapper’s ExecuteAsync method, which returns an int: the number of affected rows.
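A minimal sketch of that approach, as a variant of the UpdateAsync shown earlier (the exception mirrors the style used in GetAsync above):

public async Task UpdateAsync(T t)
{
    var updateQuery = GenerateUpdateQuery();

    using (var connection = CreateConnection())
    {
        // ExecuteAsync returns the number of rows the UPDATE affected
        var affectedRows = await connection.ExecuteAsync(updateQuery, t);

        if (affectedRows == 0)
            throw new KeyNotFoundException($"{_tableName} record could not be found for update.");
    }
}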

Example of generic repository usage:

public static async Task Main(string[] args)
        {
            var userRepository = new UserRepository("Users");
            Console.WriteLine(" Save into table users ");
            var guid = Guid.NewGuid();
            await userRepository.InsertAsync(new User()
            {
                FirstName = "Test2",
                Id = guid,
                LastName = "LastName2"
            });


            await userRepository.UpdateAsync(new User()
            {
                FirstName = "Test3",
                Id = guid,
                LastName = "LastName3"
            });


            List<User> users = new List<User>();

            for (var i = 0; i < 100000; i++)
            {
                var id = Guid.NewGuid();
                users.Add(new User
                {
                    Id = id,
                    LastName = "aaa",
                    FirstName = "bbb"
                });
            }

            var stopwatch = new Stopwatch();
            stopwatch.Start();
           
           
            Console.WriteLine($"Inserted {await userRepository.SaveRangeAsync(users)}");

            stopwatch.Stop();
            var elapsedTime = stopwatch.ElapsedMilliseconds;
            Console.WriteLine($"Elapsed time {elapsedTime} ms");
            Console.ReadLine();
        }
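For completeness, the UserRepository used above is not shown in this post; a minimal version consistent with the constructor call in Main could look like this:

// Minimal sketch of a specific repository built on the generic one.
public class UserRepository : GenericRepository<User>
{
    public UserRepository(string tableName) : base(tableName)
    {
    }

    // Use-case-specific queries (if any) would be added here
}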

 

Use case in CQRS/ES Architecture

CQRS stands for Command Query Responsibility Segregation. It is an architectural pattern which separates the read model from the write model. The idea is to have two models which can scale independently and are optimized for either reads or writes.

Event Sourcing (ES) is a pattern which states that the state of an object is persisted in the database as a list of events, and can later be reconstructed to the latest state by applying those events in order.

I will not go into explaining what these two patterns are and how to implement them, but rather focus on a specific use case I’ve dealt with in one of my projects: how to use a relational database (MSSQL) for the read model and the event store, utilizing the Dapper data mapper and the generic repository pattern. I will also touch, albeit briefly, on event sourcing using the same generic repository.

Example architecture:

Before going any further let us consider why using data mapper would be more beneficial than using ORM for this particular case:

Impedance Mismatch in Event Sourcing

An object-relational impedance mismatch refers to a range of problems with representing data from relational databases in object-oriented programming languages.

Impedance mismatch has a large cost associated with it. The reason is that the developer has to know both the relational model and the object-oriented model. Object Relational Mappers (ORMs) are used to mitigate this issue, but not to eliminate it. They also tend to introduce new problems of their own: the virtual properties required by EF, private property mapping issues, polluting the domain model, etc.

When using events as the storage mechanism in an event store, there is no impedance mismatch. The reason for this is that events are a domain concept and are persisted directly to the event store without any need for object-relational mapping. Therefore the need for an ORM is minimal, and using Dapper with a generic repository becomes more practical.

Database model considerations

In this use case MSSQL is used for both the write and the read side, which adds to the reusability of the Dapper repository, since it can be used on both sides.

Primary key consideration

In this example I used Guid (.NET Guid and the uniqueidentifier MSSQL datatype) as the primary key identifier. It could have been something else, like long, int, or string.

In any case this would require some additional work on the Dapper repository. First, the interface would need to change to accept an additional primary key type. Then, depending on the type, there might be some additional work to modify the generated queries.
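A sketch of how the contract might change, with a second type parameter for the key (this generic variant is my assumption, derived from the interface defined earlier):

public interface IGenericRepository<T, TKey>
    {
        Task<IEnumerable<T>> GetAllAsync();
        Task DeleteRowAsync(TKey id);
        Task<T> GetAsync(TKey id);
        Task<int> SaveRangeAsync(IEnumerable<T> list);
        Task UpdateAsync(T t);
        Task InsertAsync(T t);
    }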

Having more than one column as the primary key would also imply additional work, and in that case using the Dapper/generic repository pattern would probably be counterproductive. We should opt for a full-blown ORM in that case!

Bulk inserts with Dapper

Dapper is NOT suitable for bulk inserts, i.e. performing a large number of INSERT statements. The reason is that the ExecuteAsync method will internally use a loop to generate the insert statements and execute them one by one. For a large number of records this is not optimal, and I would recommend using either the SQL Bulk Copy functionality, a Dapper extension which allows bulk copy (a commercial extension), or simply bypassing Dapper and working with the database directly.
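As a rough sketch of the SqlBulkCopy route (assuming the Users table and User DAO used earlier, and a connectionString variable in scope):

using System;
using System.Data;
using System.Data.SqlClient;

// Build an in-memory table whose columns match the destination table
var table = new DataTable();
table.Columns.Add("Id", typeof(Guid));
table.Columns.Add("FirstName", typeof(string));
table.Columns.Add("LastName", typeof(string));

foreach (var user in users)
    table.Rows.Add(user.Id, user.FirstName, user.LastName);

using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "Users";
        // Streams all rows in one operation instead of row-by-row INSERTs
        await bulkCopy.WriteToServerAsync(table);
    }
}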

Transactions handling

A use case for applying a transaction is saving into more than one table atomically. Saving into the event store is an example of this: save into the AggregateRoot table and the Events table as one transaction.

Transactions should be controlled manually either on Command level (in CQRS Command implementation) or inside repository.

This example with two tables is inspired by Greg Young’s design which can be found here: https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf

using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
            {
                // Fetch Aggregate root if exists, create new otherwise
                var aggRoot = await _aggregateRootRepository.FindById(aggregateId.Identity);
                if (aggRoot == null)
                {
                    aggRoot = new AggregateRootDao
                    {
                        Id = Guid.NewGuid(),
                        CreatedAt = DateTime.Now,
                        AggregateId = aggregateId.Identity,
                        Version = 0,
                        Name = "AggregateName"
                    };

                    await _aggregateRootRepository.InsertAsync(aggRoot);
                }
                else
                {
                    // Optimistic concurrency check
                    if (originatingVersion != aggRoot.Version)
                        throw new EventStoreConcurrencyException($"Failed concurrency check for aggregate. Incoming version: {originatingVersion} Current version: {aggRoot.Version}");
                }

                // Avoid multiple enumeration of the incoming events
                var domainEvents = events as IDomainEvent[] ?? events.ToArray();

                foreach (var e in domainEvents)
                {
                    // Increment Aggregate version for each event
                    aggRoot.Version++;
                    e.Version = aggRoot.Version;

                    // Store each event with incremented Aggregate version 
                    var eventForRoot = new EventDao()
                    {
                        CreatedAt = e.CreatedAt,
                        AggregateRootFk = aggRoot.Id,
                        Data = JsonConvert.SerializeObject(e, Formatting.Indented, _jsonSerializerSettings),
                        Version = aggRoot.Version,
                        Name = e.GetType().Name,
                        Id = e.Id,
                    };

                    await _eventsRepository.InsertAsync(eventForRoot);
                }

                // Update the Aggregate
                await _aggregateRootRepository.UpdateAggregateVersion(aggRoot.Id, aggRoot.Version);


                transaction.Complete();
            }

 

If the aggregate root is not found, it is created and inserted into the AggregateRoot table. After that, each domain event is serialized and saved into the Events table. All of this is wrapped in a transaction and will either fail or succeed as one atomic operation. Note that the transaction uses the TransactionScopeAsyncFlowOption.Enabled option, which allows async code to be invoked inside its body.

Conclusion

The implementation here can be further optimized for use in CQRS/ES systems, but that is outside the scope of this post. This implementation gives enough flexibility to easily extend a specific repository with new functionality, just by adding custom SQL queries to it.

Author: Damir Bolić

Domain Driven Design and the art of Pushing Back

Closing thoughts

In order to successfully practice domain driven design, developers can’t just be given requirements. They need to be actively involved in refining business processes and suggest changes to them. Period.

Author: Anes Hasičić

TDD & Pair Programming in practice

When I was a Java software engineer with a few months of experience, I had encountered TDD only through participating in a few workshops or from books & blogs. I thought: “yeah, yeah, it is all nice in theory, but who really uses TDD in practice?” C’mon, really? Just do a story or a task as fast and as well as you can, maybe write some integration or unit tests, maybe leave it “for later” … meh… who really cares… It’s a legacy project so we don’t want to put too much effort into it and hopefully we’ll rewrite it soon in a better technology and/or framework. What about pair programming? LOL. No way am I going to work with some random dude all the time. What if the person annoys me, or they will not want to do anything, or even worse, what if they are so much better and faster that they will not want to explain anything and I will not get the chance to learn? Also, who has the responsibility for the story in pair programming?

All these were my prejudices & questions about the aforementioned concepts. In this blog, I’m going to tell you how working on a Belgian public sector project has given me an opportunity to work in a pair & made me disciplined about writing tests before the implementation.

TDD

TDD (Test Driven Development) is, simply put, a way of developing where we first write a test and the implementation comes afterwards. To avoid repeating or copying definitions, I’ve provided some useful links for you to check out, and the rest you can find easily on the Internet.

Here you can watch what Uncle Bob has to say about it:

https://www.youtube.com/watch?v=AoIfc5NwRks

If you wish to practice TDD, there are some cool examples on the web like this:

https://osherove.com/tdd-kata-1

Also, it’s always nice to read a book:

https://www.goodreads.com/book/show/387190.Test_Driven_Development

Pair programming

Pair programming is, in short, two people developing at the same time. This means that one developer types the code while the other one gives suggestions and warns about possible bugs. Usually they take turns writing code and assisting. Sometimes one writes a test and the other one writes the implementation, or one writes code for the first half of the working day and the other one writes for the second half. It all depends on how the two people like to work, but the idea is that both members of the pair are focused on the work at the same time in order to produce quality, bug-free code.

You can read more about pair programming here:

http://www.extremeprogramming.org/rules/pair.html

https://www.codementor.io/pair-programming

How does it look in practice?

Way back, when I had only a few months of experience as a Java web developer, I didn’t even know there was a different way of programming than: “do the implementation and after that make sure it works by writing tests for it”. The first time I heard about TDD was during a Spring framework introduction workshop I attended, but I wasn’t familiar with pair programming until I started working at my current company, Tacta.io. There I was told that such an approach leads to better code quality & a smaller number of bugs in the code. OK. I understood that TDD is, in theory, the “right” way to develop code, but at first I was skeptical about whether people really develop in a TDD way and how pair programming even works or helps.

My first real experience with TDD and pair programming happened while working on a big project with around 1,890,000 lines of code (1,007,000 of which were test code). We are also talking about a project that had its first commit in 2010, after two failed development attempts that hadn’t followed the concepts mentioned earlier. So how do you work on that kind of project and keep it maintainable for all these years?

Some of the most important ways to keep the code base maintainable are not only using TDD & pair programming, but also developing according to clean code and DDD (Domain Driven Design) principles. All this pays off in the long run, especially when you have complicated domain and business logic in the project.

Alongside things mentioned above, we also have a strict development process that we follow:

  • Daily standups – say what you did yesterday, what you are doing today & whether something is blocking you
  • Pair modeling – a process of creating a document, written by two people, with a functional & technical summary of a story that needs to be implemented
  • Story kickoff – after the pair modeling, a story is divided into tasks and presented to the team. The team asks questions and also gives suggestions if something can be done better
  • Pair programming – two developers actively develop a story; one takes the lead on the story, which means he/she will work on it until it’s done, and the other member of the pair changes on a daily basis
  • TDD – we strongly try to keep the discipline of writing tests first. Unit tests usually go first and the implementation afterwards. Production code is also backed up with integration and end-to-end tests
  • DDD – while developing, we should be aware that the project must follow DDD concepts and rules
  • Proxy check – check the story with proxies (proxies are people who talk to users and make sure that the stories they wrote are functionally OK)
  • Merge/pull requests – create a merge request so that other developers can take a look at your code and approve it or request changes if necessary

Yeah, yeah, it is all nice in theory, but in practice who really uses TDD?

We do. After a story kickoff you sit with your other half of the pair for that day and you start working on the story. First you create a test, then the implementation, then some more tests and so on. Of course, do not forget to refactor! Alongside unit tests, you often need to add integration tests and/or end-to-end Selenium tests to check that everything is OK when a user works with the web application in a browser.

No way am I going to work with some random dude all the time!

IMHO, working with people is sometimes the hardest part of my job, but also, working in a pair has more benefits than disadvantages. Yeah, it can happen that you aren’t so fond of the person who is pairing with you that day, but that happens rarely and you switch pairs every day. Sometimes discussions can take a lot of time, but in the end, they also teach you how to defend and explain why you do what you do & develop your communication “soft” skills. Also, I learned a lot about how other people think & work. With pair programming you get a chance to share a lot of knowledge, because you are literally working next to another developer on the same thing. In the end you do not escape responsibility. You do share it with your pairs, but if you are the story lead, then you should ensure that the story is implemented as well as possible, both in the code and functionally.

TDD & pair programming often work as shown in this simplified scenario:

Developer A: Oh, this story is easy, just like we discussed at the kickoff! Let’s add a method to the service to search for files that have status “ACTIVE”.

Developer B: Great, we are working according to TDD so let’s start with a unit test! We should add a test that creates two mock files. The first one should have “INACTIVE” status and the other file should have “ACTIVE” status. This test will then call our method and assert that it returns a list with only one “ACTIVE” file.

Developer A: Nice, I’ll do the test and you can do the implementation and after that we switch?

Developer B: It’s a deal. I’ll help you by watching and commenting, and I expect the same from you when I’m writing code.

Hopefully our two developers created some quality code with good test coverage. But even if they didn’t, they still have to pass the proxy check and merge request reviews. 🙂

Conclusion

After working for more than a year according to TDD and in pair, some of my opinions on those subjects have changed:

  • TDD changes the way you think and work. The ability to create a test before the implementation proves that you understand what needs to be done. With each test you improve and refactor your code, and the tests ensure that the code does what it was intended to do.
  • If you are working on a big project, it is hard to know everything, and working in a pair speeds up the development.
  • Two people usually have discussions and explain things out loud, which maybe wouldn’t happen so often with “solo” development. This leads to better knowledge sharing, code quality and edge case test coverage.
  • I met some amazing people and each of them has their own unique way of thinking. I learned a lot from them – sometimes it was the use of some new IDE shortcuts, sometimes it was how to think in a DDD way and sometimes we learned together about the project and technologies like the Java language, the Spring framework or Hibernate.
  • I’ve found my preferred way to pair program and, in my experience, I remain most focused if I switch roles based on the test/implementation cycle.

I hope that this blog has given you some idea of how TDD and pair programming look in practice. If you have more questions, feel free to send me an email at: luka.sekovanic@tacta.io.

Author: Luka Sekovanić

Organizing software development

I’ve worked in software development for 20 years. Through those years, in the context of dozens of different projects in different industries, I have witnessed the evolution of software process organization from Waterfall, to RUP until nowadays, agile methodologies.

What is so challenging and difficult about software development processes that it sparks so many discussions and even requires people and organizations who make a living simply by advising on the matter?

Well, my answer is – the quality of the people. The second thing is – the size of the team, and the third – leadership by example.

 

I will start with an example of a project I once worked on, back in 2004. Five guys (me included) were assigned the task of developing a new worldwide part ordering system for a client in the automotive sector. The existing database (mainframe) was there. The requirements were there. We had no process at all. Our lead architect was developing with us, the analyst was sitting at a table next to us. The manager would come from time to time with a demand for the next delivery; we would all listen to him at that moment and return to work after that. Everything on that project went perfectly smoothly, on time and within budget. How would we call this? A success story that was agile before the term was even used? Could be.

But an even better answer is that this was a project where only people with enough common sense and ownership were involved. And there were not too many of us. The guy who knew everything about the domain really cared about the business and was coding with us.

The following is an example of another extreme. A project with almost 100 developers, many managers, testers, analysts – all paid from a huge government budget. In the end, it did not deliver anything useful and huge amounts of money were spent. Many experienced process organizing experts were paid to come and help. It simply did not work. Ownership was not encouraged; common sense was lost long ago. The size of the team compromised both ownership and common sense in a very short period.

Why was the team so big, and why did we not move forward? The answers to these questions are specific each time something like this happens, but the rule is to never get into a situation where you need so many people working under the same deadline – unless the type of work you are doing can be done by a well-written script – there are products these days that generate source code for that type of work.

 

Common sense. Ownership. Size of the team. Lead by example. Agile concepts are nothing more but a derivative of these few terms, written by people who were lucky enough to spend time in such environments.

Why are we unable to ‘stabilize’ software development, in the sense that ‘throwing’ enough people at it will always solve the problem in an amount of time that is proportional to the team size increase? To make a comparison with another industry, let’s say that I am about to build a huge building and the architecture is already worked out in detail. My thinking is that having 100 construction workers will probably be 40 times faster than building it with 2 of them. I would lose some efficiency in the overhead of organizing all those people and assigning them to their next task in an optimal way – but it surely would go way faster. In software development, that is not the case – things could in fact even get worse – and the question is why.

Developing software means making something new, something not yet developed. Otherwise you probably wouldn’t even get the assignment to build it. Or, if someone else has developed something similar before – this time it needs to be rebuilt, slightly differently and with entirely new technologies, in a different context, and for a different client. In other words, each time a software endeavor is about to start, it requires some ice breaking. Ice breaking is not done with hundreds of ships that know how to sail (know “how to code”); it is done by a limited number of ships that are skilled in ice breaking. Only after, and not before, the sea is open for sailing can you put in hundreds of other ships, with the simple purpose of traveling from point A to point B. But, by the time the sea is open for sailing, you realize that in fact there is no need for hundreds of ships at all. We realize that we were in fact misled by someone telling us that clearing our sea of ice would need years and hundreds of ships to accomplish.

 

Actual coding – sailing in the open sea with some ice blocks floating around – is the easy and fun part of any software project. Good ships (developers) are needed to navigate and avoid those remainders of the floating ice: to tackle them by recognizing and challenging badly written requirements, by writing well-designed and readable code, by making a test for every change of the code in order to avoid hitting any ice ever again.

Heading towards the exact same destination is the next challenge, a challenge that we can overcome only with perfect communication, that is, a team small enough to think and talk alike.

Good software development is about having a small team of people with willingness (ownership) and capability to understand what the business actually needs (create and implement well written user stories). It is about making sure that the chosen technologies will always work in cohesion (continuous integration); that the whole thing will actually deploy on the required platform and that the delivered solution will effectively be maintainable in the long run (use test driven development).

 

Author: Sejo Ćesić

TACTA Internship

In December 2018, we started a .NET internship for the first time at Tacta. We had some experience in mentoring, but not in organizing an internship that simulates a real working environment. Having little experience in this area, but led by the idea that our principles should be applied to the internship, we succeeded!

We wanted to provide interns an experience of a real agile environment that includes daily scrums, planning, and retrospective.

We also wanted the interns to get familiar with the best practices in our company such as pair-programming, test-driven development and domain-driven design.

It was a challenge to incorporate all these things into the internship, but it was also exciting. Together with the interns, we co-created the internship. It turns out that we learned a lot through the whole process, but most importantly, we got a chance to meet and spend time with great people.

 

We asked the interns to tell us what they think about the internship, and here is what they said:

 

“After spending almost three years of my life attending college lectures and mostly learning about theoretical subjects in the IT world, I felt it was time to put everything I learned there to a test in a more practical environment. My internship at Tacta turned out to be exactly the challenge I wanted.

Working in a team and experiencing first-hand the marvels of pair-programming really gave me a different perspective on programming, and how much more effective and fun it is when you have an extra set of eyes focusing on the task at hand. Consistent code reviews and a culture which encourages everyone to speak their mind and constantly ask questions were something that really made a huge impact on how fast I was progressing and learning stuff that I thought would take me a long time to learn.

The domain of knowledge which this internship has instilled in me was broader than I imagined. Learning the ins and outs of .NET Core and Angular quickly became something that is fairly trivial, the real challenge and, the most interesting and important aspect (to me) was learning about and implementing test-driven development along with trying to grasp some of the domain driven design principles. That definitely felt very unique and that’s where the technical aspect of this internship really shines through.

Along with Tarik (our mentor), it really felt great that everyone at Tacta was more than kind and always willing to lend a helping hand, and made us feel like we really were, for those three months, a part of a family.” – Nermin

 

 

“The reason I chose this internship was the technology that was advertised, .Net. We focus on .Net at the Faculty, and I thought, it could be nice to learn how technology is used in a business environment. Yet, the reasons why I loved the internship had little to do with the technologies itself but with the “philosophy” I was taught. We were advised more on how to think when working, what was a proper way to build a system, how to communicate with your colleagues, clients and superiors, basic principles of DDD and TDD. Also, I’d like to mention that the atmosphere in the company was nice, we were able to talk to almost all employees, including the CEO himself and it was nice that our mentor, Tarik, allowed us to organize our time and work among ourselves. I like to think that I improved a lot at Tacta, as a developer, and as a team member. This concerns both the technologies we used and the way of thinking in a business environment.” – Selim

 

“I’m not sure where to start in describing my journey as a Tacta intern…

Why did I apply? What did I expect? To be honest, like many of my friends at the faculty, at one point in time I was aware that school projects and other faculty-related stuff did not reflect real-world programming. It all looked fake and unreal in my eyes. I knew I had to expand my knowledge and explore other ideas and concepts. I wanted to meet the people from the industry and try to see how they tackle everyday programming challenges. To see programming beyond “just making a program work”. This internship was definitely an eye opener for a lot of things stated above.

From the first day of the internship the learning process started and, really, it never stopped. In a matter of days, I found myself working in the team of young people, tackling challenges and discussing ideas. I would like to emphasize the discussion part, as we discussed pretty much everything. From the basic ideas to the problems that we encountered. There were no stupid questions. A big part of this discussion was definitely our mentor Tarik who followed us through our journey and made sure we were right on track. Soon I saw a lot of things that you only read about in school, put in practice. Pair programming, DDD, TDD, clean code and so on. It was the first time that all these things made sense and before I knew it, I wasn’t just thinking of how to make something work, I was thinking of how to make it the right way. Probably the greatest revelation for me was acknowledging the importance of non-technical skills, or as people like to call them these days – soft skills. We had to learn to communicate correctly, listen actively, present our ideas clearly, manage our time and so on. Not really the stuff that first crosses your mind when talking about software development, at least not in the eyes of developers.

I could go on and on about all the great people I have met and other cool things that I have learned, but I believe that you get the point that I’m trying to make. All in all, I can really say that Tacta internship was one heck of a journey that I recommend to everyone.” – Dejan

 

“After completing lectures at the Faculty, with only one exam left to complete my education and get my Bachelor’s degree in Information Technology, I realized that I would have plenty of free time that I should use in the right way. Then I heard that Tacta offers internships for students. I was thinking that an internship could be the first step towards becoming a developer and a starting point for my career. So, I applied and, fortunately, they recognized my potential and desire to learn. The internship lasted for 3 months and on the very first day we entered the real dynamic world of the IT industry. We were in a group of 4 and we had a mentor. Our mentor and other employees in the company were always available for all the questions we had and were always willing to share their knowledge with us. Having the opportunity to work in an agile environment was a new and very interesting experience for me. At the Faculty, I learned what standups, sprints, sprint retro and all elements of agile methodology are in theory but through this internship I had the opportunity to experience it in real life. Also, I had a chance to experience pair programming technique. It was difficult in the beginning but from day to day, our communication skills were improving, and I realized that the development process becomes faster because two heads are better than one. We used TDD and writing a test for each functionality helped me figure out how testing is an important part of any software development methodology. It is required to detect bugs and defects before the final product is delivered to the client and it is an essential component of successful development projects. We worked on a project from start to finish and learned everything that is needed for web development and more. The internship covered every bit of the software development process and taught us how to become full stack developers. In just a few months I learned how to create backend and frontend parts of the application in some highly popular frameworks. By completing each assignment, the mentor had taught us the best practices of coding. For the backend part of apps, we developed a RESTful web API in C# and .NET Core. By making a frontend part, I stepped into the amazing world of JS frameworks which I had no previous experience of. Using Angular and Material Design, we built an interactive and dynamic application that looks very nice. During the internship program I became familiar with a lot of helpful things that every junior developer needs to know like using git, design patterns etc. In addition to technical skills, I have also improved my soft skills and this internship gave me a lot of confidence and courage to step into the world of IT and look for my dream job.” – Lejla


A Decade of DDD, CQRS and Event Sourcing

For those that haven’t really moved past the blue book…

DDD, for many, including me, brought back the enjoyment of software development. Implementation becomes easy when you break down the domain. The bits and pieces suddenly start to “fit” together in a way they did not before, and the implementation itself becomes straightforward, resulting in simple, maintainable and easy-to-understand code that will outlive the development team itself.

DDD has come a long way since “the blue book”, but in my experience, not enough people realize that DDD is a growing, evolving thing and that it has indeed learned a few new tricks along the way. This blog is a brief recap of some of the notable things that happened in the DDD community during the last decade or so.

 

Can I DDD?

First things first. What do you need exactly to successfully implement DDD? What are the prerequisites without which it wouldn’t be feasible?

According to Eric Evans, there are two main ones:

  • Iterative development process
  • Access to domain experts

Iterative development, in my opinion, is the life force of DDD. It enables experimentation and exploration, and calls for active refactoring of the problem domain as you continue to gain more insight alongside your domain experts. Additionally, you make use of tight feedback loops which involve feedback obtained through actual usage of the system being constructed.

When you really come to think about it, what are the odds that you have discovered the “best” model for your domain the first time? Even the first couple of times?

The chances are pretty slim. Especially so due to the fact that there is no such thing as “the perfect” model for almost anything that you might be modeling.

Domain model is a living thing. It grows and evolves over time through active discovery and is never really done. It only gets to be “good enough” at best. Which is totally fine.

One of the significant mistakes we all make is slipping toward perfectionism. As we already made clear, DDD depends on iteration, so don’t get caught up in the details too early. Do the first prototype quickly, then get to the second one quickly, etc.

“Perfect is the enemy of good” and perfectionism prevents you from doing enough innovation.

With that being said, don’t settle for the first useful model you encounter. Keep iterating, always rigorously refine, and keep watching for even the most innocent-looking workarounds. They are an indication of a non-optimal model, and it’s almost certain that you have missed a modeling opportunity somewhere along the way.

The iterative development approach makes heavy use of domain experts. It’s hard to have one without the other, and missing out on either will most certainly make your DDD efforts futile.

At this point, you might be thinking: “Well, what about DDD Lite? It provides me with a lot of useful abstractions and modeling tools. Couldn’t I get away by just using the tools and abstractions that the DDD Lite approach provides?”

Sorry, but the answer is no. DDD Lite can only get you so far, due to its nature. It provides a set of modeling tools/building blocks (e.g. entities, repositories, value objects etc.) which help you implement DDD itself! By only using DDD Lite, you actually miss out on DDD altogether…

What use are all of the abstractions and modeling tools, if you don’t have a good idea of what you are building, or even worse… If you are building the wrong thing?

 

Explicit context boundaries and the Core Domain

Focusing on the Core Domain, as Eric Evans puts it, is a game changer.

Focus your DDD efforts on your Core Domain. The stuff that really makes your company stand out from the competition. The thing that gives you an edge and a competitive advantage on the market.

Companies can waste so much time, money and effort by reinventing the wheel and applying DDD to the parts of their domain that could have gotten away with a more simplistic approach or even be replaced by existing, off the shelf solutions.

But, in order to identify your true core domain, you will need to define explicit context boundaries through any of the context mapping techniques (I suggest you look up Event Storming, more about it later…).

With all of this being said, it takes a certain level of discipline to keep the bounded contexts separate, but it yields great benefits. Almost any project, whether it makes use of DDD or not, big or small, can benefit from context mapping and having explicitly defined context boundaries. These boundaries separate the really important parts of the domain from the less important ones, and will ultimately help you identify your Core Domain, which is where most of your DDD efforts need to be directed.

Context mapping and the big ball of mud

What do you do if you are dealing with a legacy system, a big ball of mud? How do you get a taste of DDD there (assuming you still can employ the iterative development approach, and have access to domain experts)?

Well, just because the legacy system, i.e. “the big ball of mud”, exists, it does not mean you have to keep cramming it with new features. Rather, employ your context mapping techniques here. Draw a line around it and say “this is my big ball of mud”, and after that draw a line around your new service and treat it as a separate bounded context.

As Evans puts it, it’s probably inevitable that your service will get enclosed by the big ball of mud eventually (since it will eat almost anything), but at least you had a nice run for a time.

 

A word on DDD building blocks

Let’s touch upon DDD lite again quickly.

Building blocks (Entities, Value Objects, Factories, Repositories …) are overemphasized! — Eric Evans

Yes, you heard it right. Building blocks have gotten too much attention, but don’t get me wrong, they are still important and provide a great value. Building blocks are what they are. They are a means to an end, mere implementation details that help you implement strategic DDD patterns.

As Evans stated, he regrets putting the strategic patterns way back at the end of the book, which probably resulted in many people giving more importance to the tactical patterns because they come first or even worse, they get so caught up in the intricate implementation details of tactical patterns, they don’t even get to the most important part of the book.

The thing to take away is that building blocks provide you the tools to implement DDD, but you should give much more focus to strategic patterns, even more so because tactical patterns/building blocks will continue to evolve, some will become obsolete, new ones will be added (e.g. Domain Events).

Aggregates

An Aggregate represents a conceptual whole in the domain that consists of other small parts (Value Objects and/or other Entities) and protects an always-consistent invariant across all of them.

One question that pops up often is a concern that a lot of people have regarding the awkward cases where their aggregates have an invariant that crosses thousands of entities.

Since we can access a child entity only through its parent aggregate, do we load all of them each time we load an aggregate? Do we make use of lazy loading? Do we model this differently even though it’s a business invariant? If yes, what’s the correct model for this?

The bottom line is that OO is really not good at handling collections of objects, especially very large collections, which calls for loosening and “bending” the rules a bit in these cases.

The same applies if you have a small number of aggregates but have a lot of concurrent users that might be interacting with the aggregate at the same time.

In cases like these, you might consider modeling those entities as separate aggregate(s) and try enforcing the business invariant on a higher level. For example, in domain services, process managers/sagas etc…
In short, make the consistency an explicit concern of your domain instead of it being solved implicitly through the infrastructure.

As it turns out, another potential solution is related to another question/concern that keeps popping up when working on a project that makes use of CQRS as a standalone pattern or in a combination with Event Sourcing.

Can write side query the read side?

According to Greg Young, the answer is “yes, absolutely, there are cases when you simply have to”, and one of those cases is related to the challenge we mentioned here.

I’d argue that even if you don’t face the aggregates/entities problem we describe here, there are a lot of cases where it’s OK to query the read side (I have certainly done it more than once).

It’s fairly easy to spot these opportunities, because in the majority (if not all) of cases they tend to present themselves in the form of a specification pattern; but since you are using Event Sourcing, or CQRS as a standalone pattern, you can’t really make use of specifications.
But luckily for you, specifications and CQRS are two competing/interchangeable patterns.

A rule of thumb would be to aim for very small and specific projections which can provide a very specific answer to a very specific question that would be very cumbersome to answer by querying domain models.
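As an illustrative sketch (all names here are mine, not from any particular project), such a focused read-side check consulted from the write side might look like:

using System;
using System.Threading.Tasks;

public class RegisterUser
{
    public string Email { get; set; }
}

public interface IUserLookupProjection
{
    // Backed by a small, denormalized read-side table
    Task<bool> EmailAlreadyRegistered(string email);
}

public class RegisterUserHandler
{
    private readonly IUserLookupProjection _lookup;

    public RegisterUserHandler(IUserLookupProjection lookup)
    {
        _lookup = lookup;
    }

    public async Task Handle(RegisterUser command)
    {
        // One very specific question, answered by a tiny projection
        // instead of loading and querying domain models
        if (await _lookup.EmailAlreadyRegistered(command.Email))
            throw new InvalidOperationException($"Email {command.Email} is already registered.");

        // ... create the aggregate and persist its events
    }
}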

Vaughn Vernon has a lot to say about aggregate design.
Check out his two-part Aggregate Design paper:
– Effective Aggregate Design Part I
– Effective Aggregate Design Part II

I also highly recommend you check out his DDD book:
Implementing Domain-Driven Design

 

DDD in a modern “always on” world

Software applications outside of the enterprise world, in general, have quite different requirements in terms of performance, latency, responsiveness, and scalability. There was a fear that applying DDD patterns to these kinds of domains would not really be feasible due to the aforementioned constraints and the overhead that OOP with DDD applied would introduce.

This might have been a great obstacle to widespread DDD adoption, but luckily, applying DDD to these kinds of domains gave birth to a new approach (the ideas were there for centuries actually) towards applying DDD under the name of Event Sourcing and CQRS.

In short, Event Sourcing and its complementary pattern CQRS gave us an extravagant revamp of the DDD building blocks, and a way to employ DDD patterns through the use of Domain Events as first-class citizens and the sole source of truth in these kinds of systems, while at the same time satisfying the high-throughput / high-availability needs of such systems by enabling us to scale reads and writes separately and to employ inter-service integration via Domain Events.

Event Sourcing also provides a stepping stone towards implementing DDD in a functional world, due to the immutable nature of its event streams (an aggregate’s state is a left fold of all of its events).

But meeting scalability demands of large distributed systems is not the only benefit of employing this kind of event-centric approach towards modeling our systems.

There are a number of other benefits of employing an event-first modeling approach:

  • Explicitly modeling important domain events and formalizing them, forces domain experts to think in terms of the behavior of their system instead of in terms of its structure. This especially helps, since people tend to think in terms of their legacy systems, instead of focusing on the problem they are really trying to solve.
  • By modeling events, and focusing on the behaviors instead of nouns, even the domain experts get a different perspective on their domain and gain additional insight.
  • Event modeling forces a temporal focus and makes time a crucial factor (which it is)

If any of these resonate with you I encourage you to check out Event Storming by Alberto Brandolini. Event Storming employs an event-centric approach in order to distill the domain.

I won’t go into detail here; I will just mention that Event Storming has a number of different flavors, which you will find on the site and in the book, but I would like to mention one more variant, by Greg Young.

In his variant, you basically just take one single long-running business process end to end and model it using Event Storming in order to discover your service boundaries. This worked very well for me.

 

Event Sourcing / CQRS misconceptions/pain points

During one of his talks, Young focused on some recurring pain points/misconceptions that kept coming up regarding Event Sourcing and CQRS, and offered clarification and advice on how to approach them. Here is a short recap.

CQRS is not a top-level architecture!

CQRS is a supporting pattern and you need to treat it as such. Don’t go crazy! Instead, apply it selectively to a few places.

Commands must return void?!

The bottom line is that it’s not about return values, it’s rather about side effects. It is perfectly OK for a command to return a list of errors, for example, instead of relying only on throwing exceptions (which is a bit of a bad practice anyway).
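A tiny sketch of what that could look like (the CommandResult shape and the account example are my assumptions):

using System.Collections.Generic;

public class DebitAccount
{
    public decimal Amount { get; set; }
}

public class CommandResult
{
    public List<string> Errors { get; } = new List<string>();
    public bool Success => Errors.Count == 0;
}

public class AccountCommandHandler
{
    private decimal _balance = 100m; // simplified in-memory state

    public CommandResult Handle(DebitAccount command)
    {
        var result = new CommandResult();

        // Validation failures are collected and returned, not thrown
        if (command.Amount <= 0)
            result.Errors.Add("Amount must be positive.");
        else if (command.Amount > _balance)
            result.Errors.Add("Insufficient funds.");

        if (result.Success)
            _balance -= command.Amount;

        return result;
    }
}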

Command vs Domain Event is not strictly a one on one relationship!

An event does not necessarily have a corresponding command, nor does a command have exactly one corresponding event being published.
It is important to understand that there are always two sets of use cases. The commands coming in and the events coming out.

An event is not strictly the result of a command coming in. In event-centric architectures this is commonplace, and the whole point of it is that events cause stuff to happen. A business process that results in publishing a domain event is oftentimes triggered by another event, without any commands.

There is no such thing as one way / async Command

The whole point of a command is that I have the ability to tell you No!

“Async” commands don’t really give you that option; you just fire and forget, which kind of defeats the purpose. They do not work well in the real world.

By accepting a command, it should mean that you have validated it, you can execute it, and once processed, it’s done. Otherwise, what you really want is a downstream event processor.

Don’t write a CQRS / Event Sourcing framework. Period.

I think we need to start realizing that we do not need a framework for everything. Frameworks have their place, but we need to put an end to the framework-first mindset and instead try to solve our problems with focused modules/libraries, rather than relying on almost always “too generic” frameworks which tend to sprawl their tentacles all over our code base.

We need better examples!

As Greg Young puts it, we need better examples. Event Sourcing is hard, that’s a fact, and simple examples like simplified shopping carts don’t really do it justice.

Recommended training material:

Author: Anes Hasičić

 

This blog has also been published on Medium