Tacta Academy: How I started coding & finding myself as Quality Assurance Engineer


This is a story of how I started coding and kept on learning and growing as a Quality Assurance Engineer in Tacta. Learning to code is tough, it can take a long time and I’m hoping my confession might help others find motivation, so let’s dig into it!

The first coding experience

The first time I tried to learn coding was in 2014, and the language was Java. Back then (being a smarty pants) I thought I could learn coding with little effort, since it was generally easy for me to pick up new things. I had no idea at the time that learning to code takes a lot of time, practice, dedication and commitment – hacking away at it constantly is much more important than just being smart.

Being unnecessarily hasty in my learning I didn’t have enough patience for Java, so I did not touch it for the next couple of years! But, in the meantime, I did learn JavaScript and PHP – as I was into learning web development at the time.

Specializing in QA & test automation in Java

After doing web development for some time, I fell in love with testing, or QA. That’s when I realized that testers can also code and that manual testing is there to complement test automation – they go hand in hand: to do automation the right way, you need to understand the business logic.

At my previous company I was a Junior QA (Quality Assurance) engineer, just getting acquainted with test automation on a real-world project – prior to that, in my first company, I had been doing only manual testing. There were online resources at the time, but a lot of them were either outdated or didn’t cover the tooling I was supposed to be using at work – and Test Automation University didn’t exist yet.

Removing obstacles

I wanted to focus more on doing automation in Java, but what was working against me was the fact that I was working on several projects at the same time (three to four projects usually, as I was a bench consultant) which made focusing on the project with Java automation more difficult. 

I thought I had to learn everything at once, putting too much pressure on myself, instead of learning incrementally in small, manageable chunks.

Yet another obstacle was the fact that I was preparing for automation with C#, NUnit, Visual Studio, etc. – the company was mostly using Microsoft technologies for everything, with the sole exception of the small Java project I got assigned to. Seeing a finished test automation framework built by people with 5–10 years of experience was also pretty daunting. I knew the basics but, to me, the framework seemed scary and overengineered – looking back now, the “scary” framework makes perfect sense: it was built using SOLID principles and the tests followed the DAMP paradigm.



QA in Java at its best in Tacta

It was in Tacta that I truly matured as a QA and rediscovered my love for Java.

I was assigned to do test automation on an API-based product; the backend was being built in Java, using the Spring framework.

After several years of experience in test automation, I was ready to do it in Java. Having previously worked with a similar object-oriented language (C#) helped a lot – this time Java made perfect sense. I was comfortable using Postman and, since I had a pretty good grasp of the HTTP protocol, this time there was no panic. I was creating an API test automation framework, and I realized there is nothing better for that than Java.

I had several reasons behind this Solomonic solution: the first and most important one is that Tacta has many great and experienced Java developers who practice Test Driven Development and Pair Programming, meaning that I could learn from them and rely on their knowledge and support.

Being within a Java ecosystem, it made perfect sense to utilise the tools built in Java and for Java, such as IntelliJ IDE, JUnit, TestNG, Maven, Jenkins, etc. These are well established tools, highly customizable and a lot of them are Open Source!

With my colleagues’ support and experience, I stopped looking at a language from an emotional perspective and I took a gander at it from a more pragmatic point of view – which really paid off.

My takeaway for people who are learning to code and want to become Quality Assurance Engineers:

My main takeaways would be the following: 

  • Before learning any specific frameworks, learn the fundamentals of the language it’s built upon, that way you will not be using the framework as a crutch – you will know what’s going on beneath the hood and your code will be better for it. 
  • For test automation, learn design patterns as these will help you organize your code a lot better making it easier to maintain – it also won’t be brittle when making changes 
  • Use courses only to get a general sense of the technology you’re trying to learn – relying on courses and tutorials for too long will give you a false sense of security; the instructor will be holding your hand along the way and you won’t face any real challenges – and that is where we actually learn the most! So, start working on a real project as soon as you can, either at your job or on personal projects
  • Don’t try to learn everything at once; focus on one thing at a time, make long-term learning goals and plan your learning (don’t be that tutorial-butterfly that just jumps from one topic to the next). This way you will break large and complex topics into much smaller ones, which makes it easier to absorb knowledge and in turn results in frequent and satisfying small wins, as suggested in Dopamine-Driven Development
  • Also, don’t be a fanatical supporter of any language or technology stack, even your favourite one(s) since ultimately these are just tools of our trade which we use to get the job done – solve a problem for a customer so everyone involved can reap the benefits.
  • Use your colleagues’ experience and knowledge to learn how to work smarter, not harder

I hope this post might help people who are learning to code to (re)gain some of their much needed motivation – I know in my case articles like this one meant a lot, they still do! Thanks for reading!

Author: Mirza Sisic

Meet our team- Sandi Slonjšak


Our Software Engineer Sandi told us about his first working day at Tacta, the best travel, hobbies and unforgettable challenges.

TACTA: Hi, Sandi, how long have you been with #TactaTeam?

Sandi: I joined Tacta on August 12th, 2019. You do the math how long that is.

TACTA: Could you describe your standard working day in the last couple of months?

Sandi: The morning starts with daily standup, making sure that everybody in my team has the current day planned ahead and that everybody is synced. After that, I will squeeze in a couple of meetings (business or architecture talks) so that I am done by lunch with talking too much (which I really like to do). After lunch, I will try to take a block for myself of deep-focus work and this is where I write most of my code. This is the time where I also like to do pair programming in order to assure that the solution is well-thought-out, tested, and implemented correctly. After that time I am in a helping hand mode, I will go to each teammate and see how I can assist them if needed. At the core of my day, there are a couple of simple ideas ➡ I want to inspire teamwork, lead by example, deliver as much value to our clients as possible, mentor people, and want everybody to learn something each day and progress.

TACTA: Now, would you tell us something about your hobbies?

Sandi: Huh, that’s always a tricky one for me because I like to try out as many activities and sports as possible. Usually, boxing/martial arts is my favorite pick, but most recently I find hiking to be really relaxing. I really like to socialize as much as possible, so drinks/dinner with my girlfriend, family, or friends is definitely the number 1 thing for Friday and Saturday night. I also like to read a lot and play chess (no, not because of Netflix and The Queen’s Gambit, I started playing chess when I was 5 years old).

TACTA: Which countries did you manage to visit and what plans to visit?

Sandi: I visited a lot of European countries, but I actually never left the continent, Monte Carlo and Zurich are my favorite places so far. As soon as travel bans are lifted I want to go to Singapore, Dubai, and back to Switzerland because there is much more to discover (I’ve only been to the German-speaking part of the country). If you ask my girlfriend she will probably tell you about more plans or some different locations, but let’s pretend that I am the one who is deciding on this.

TACTA: Which part of your job is the most exciting?

Sandi: Well, I love all aspects of my work, so I like the fact that I am involved in business meetings, architecture discussions, process design, leading a team, pair programming, and writing efficient algorithms. If you made me choose my favorite among those I would opt for architecture discussions, this is something that I find to be a perfect mix for me, a bit of creativity, exchanging ideas/experiences, and a sprinkle of engineering, what else could you ask for.

TACTA: What’s the most unforgettable challenge with you in the process of work?

Sandi: The biggest challenge started when Sejo told me: “We’re going to build a team around you”. So, I quickly went from being engineering-focused to a person that needs to focus on a team, projects on a high level, strategy, and then engineering. It really broadens your horizon and you have to adapt quickly, think about things that you didn’t focus on before, and orchestrate a lot of processes, people, and expectations. I really wanted to do this and I am really grateful for such an opportunity, I am loving every minute of it.

TACTA: What’s the coolest thing about working with #TactaTeam?

Sandi: Coolest thing… hmmm… only one? 😊 You are making me choose which is really hard in this situation. What about having a bunch of brilliant engineers around yourself who will gladly help you? Or the boss whose mission is to enable you to be the best version of yourself? Or cool projects that will challenge your skills? Family atmosphere and a great sense of humor? How about work and travel, all the perks that we enjoy on a daily basis? I really have a hard time picking, do you see why? Also, there is so much more – teamwork, pair programming, domain-driven design, solving katas, practicing test-driven development… We would need to grab a couple of drinks and spend a couple of hours just to go through all the cool things here.

Meet our team- Nikola Štuban


If you are curious to know what Nikola Štuban, one of our Software engineers in Zagreb answered during our latest interview #MeetOurTeam then check out the full blog post. 


TACTA: Hi Nikola, please introduce yourself in a sentence.

Nikola: Hi, my name is Nikola, nice to meet you!

TACTA: Now you’re part of our team in Zagreb, but could you tell us more about your background in the IT industry?

Nikola: I started my professional career 5 years ago during my college as a junior java developer in Corvus Info – a company that is part of a bigger MSAN group located in Zagreb, but also has its offices throughout the SEE region. Primarily, Corvus Info is making our everyday lives better by making it easier for customers to pay for goods and services online. Following that goal, they developed CorvusPay, the most popular internet payment gateway solution in Croatia. I really enjoyed having an amazing opportunity to work on such an unbelievably sensitive piece of software, that also needs to be very scalable and highly available, and at the same time does financial transactions, communicates with the banks and card processing software, has complex architectural solutions, etc.

After more than 2 years in Corvus, where I met a lot of fantastic experts and friends, I joined the TACTA team at the beginning of 2017. Although working on payment software in Corvus was a great experience, I was always seeking closer contact with the people we are working for – clients of all sorts, stakeholders, but also the end-users. In my opinion, there is no higher reward at the end of a project than seeing a happy, satisfied client whose life is easier and better because of the software we delivered. I think you will agree with me that software can be (hypothetically speaking) perfectly written, but if the client in the end feels like that’s not what they were seeking – we have not fulfilled our mission.

Being a bridge over that gap – speaking with the clients and understanding their needs, while at the same time using the best practices and technologies in development, plus understanding the whole picture of how the specific system works – that was always my ideal environment. That is exactly what TACTA made possible for me. I had that opportunity from the first day I started working here, and I’m still enjoying it every day.

TACTA: How did you cross paths with Tacta?

Nikola: I found the TACTA software engineer advertisement randomly, and it seemed interesting – so I applied for the job. After figuring out that one of my colleagues from the previous company was already working here, I decided to meet the team from Zagreb and Sarajevo. The rest is history.

TACTA: The IT industry is growing rapidly; do you have any advice for software engineers to be?

Nikola: I could give a million different pieces of advice here, but the most important things (even if they might sound like a cliché) are to act naturally, to be kind and professional, and not to be afraid of asking questions. Just ask anything you don’t understand, without fear. I will also mention a quote from Vlad Mihalcea (Java Champion): “The best teams I’ve ever worked on were not made of 10x developers who knew everything, but of developers that were kind, funny, humble and professional.”

TACTA: Corona changed our daily routine for sure, but could you tell us what your typical workday looked like before and during Corona?

Nikola: I have to say that my typical workday is not that different at this moment than it was before the COVID-19 pandemic. True, we were in a sort of lockdown during spring this year when we all worked from home, but since June we are back in the office and we have more or less kept up with that practice. Sometimes we all decide to work from home for a couple of days, but that’s it. We will see what autumn and winter bring in terms of the pandemic, but if working from home 5 days a week (again) becomes our reality, I am not concerned at all, since we already proved our ability to work extraordinarily well in those conditions earlier this year (and also developed some amazing apps completely in the lockdown period).

TACTA: Back to the work. Tell us what’s the most interesting challenge you’ve come across since you joined our team?

Nikola: The most interesting challenge to me is, as already mentioned, to bridge the gap between the clients with their requirements and the software developers on the other side, while using the best practices and methodologies of the profession. I strongly believe that those two sides, if approached correctly, are not opposed, but complementary. In TACTA I had an opportunity to learn that from the best people in this area, and I still have a lot to learn.

TACTA: What’s your favorite desk snack?

Nikola: Normally I avoid having snacks on my desk since it usually ends up in peanuts all over my keyboard or monitor covered with ice cream.

TACTA: Share the secret, what’s your hidden talent?

Nikola: I would like to keep it hidden.

TACTA: When you’re not at Tacta, what’s your favorite thing to do?

Nikola: I am the father of two beautiful kids (2 years old son and 2 months old daughter), so when not occupied with work, I am trying to spend as much quality time as possible with them and my wife.

What did math teach me about business analysis


“The essence of mathematics is not to make simple things complicated, but to make complicated things simple.” –  S. Gudder

When we say the term business analysis, usually the main associations are exhaustive documents about business models and data analytics, and sometimes, if you allow yourself to be a bit exotic, you will think about a few diagrams in the form of graphs or flowcharts (how wild, right?).

In my few years of experience as a developer, I was invested in solving problems and, like almost every developer, was addicted to the daily process of logical thinking and the general feeling of accomplishment once you solve a problem, however trivial it may be. When you think of business analysis, somehow you don’t get the feeling that you’ll be invested in a lot of creative solution design.

When I decided to change my career, I was worried – will I know how to live a day without these things? I spent years developing logical skills, mathematical skills, growing into an engineer and now I found myself thinking – will all those skills go to waste?

However, I’ve found that approaching business analysis can be quite an interesting mental exercise. Contrary to being a developer, where problem-solving is your main occupation (a very general phrasing), by being a business analyst your main occupation is defining the problem.

Once I realized that I am not that far away from my usual thinking process, my fetish kicked in.

SIDE NOTE: I know the popular phrasing of problem in IT nowadays is CHALLENGE; however, being a mathematician at my core, I am not scared of the word, since I solved like a billion of them in my freshman year alone, therefore I will not adhere to the modern standards in this part.

A problem is mainly and only an equation – meaning that, in most of the cases, it is solvable. Not only is it solvable, but it is also solvable in multiple ways and can have even more than one solution. Isn’t this comforting to hear? So, no need to use the tacky word challenge.

In the following text, I will try to compare certain aspects of business analysis to some typical mathematical aspects. So let’s start with some basic math.


The n + 1 proof

At the very beginning of your abstract mathematical journey, they teach you that, to prove a certain theorem (or in plain English – a statement), you need to prove that it is true for all known cases in the world (so, for every n that belongs to the set N, which is the set of all natural numbers).

Ok. To actually (mathematically) prove that, you need to find a (creative) way to show that whenever your statement is true for some n, it stays true in the (n + 1) case. What is the (n + 1) case anyway?
Well, since you said that it should be valid for all n from the set N, and the number n can grow without bound, your claim should also survive every time your buddy n decides to level up by one – all the way toward infinity.
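The hand-waving above is, of course, just mathematical induction. Written in symbols, with P(n) standing for the statement you want to prove:

```latex
\Big( P(1) \;\land\; \forall n \in \mathbb{N}\,\big( P(n) \Rightarrow P(n+1) \big) \Big) \;\Longrightarrow\; \forall n \in \mathbb{N}\; P(n)
```

Prove a base case, show that truth at n carries over to n + 1, and the statement holds for every natural number.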

If you’re already a bit lost, no pressure. I have a shortcut.

Since all of this mentioned above seems like a lot of hassle, I’ll tell you what I do. You don’t need to go through all those cases in your head – you only need to find one that doesn’t work. Just one n, to put you out of your misery. One n that says your statement, well, is false.

In business analysis, we call this the edge case.

Whenever contemplating a certain business case, either with the customers or together with the team, once someone proposes a possible solution, I immediately rewire my brain from trying to find a solution to trying to break a solution mode (I do this to my proposed solutions too).

An edge case is not only an exception – it is proof that your proposed way just does not work for all n from the set N. In the math world, that is a deal-breaker, and to be honest, I wish it were the same in designing IT solutions. However, in IT, sometimes we accept solutions that only work in most cases, and afterward we scratch our heads when the unthinkable happens – the edge case emerges in Production.

Mathematics dwells in perfectionism and we certainly cannot transfer the same rulings to the real world. However, as I mentioned above, a problem can have multiple solutions, so let’s try and find the one that does not break once an edge case happens.


Suffering through solutions

One time during my studies, a few days after attempting an exam, the professor approached me while I was standing by the elevator.

“I cannot believe what you did!”- he said.

The exam I took was mainly about solving integrals. I was satisfied after I handed in my papers, even thinking about a good grade. Of course, that lasted approximately 15 minutes – everything fell apart once I talked to my colleagues about the solutions and figured mine seemed nothing like theirs.
However, sometimes in math, if you apply a different approach to a problem it might seem that you’ve got a different solution. You see, since the result is a function, and functions sometimes do not look alike but BEHAVE THE SAME (so, they’re the same), I still had faith that my solution could be correct.
I mean, I checked it multiple times. Line after line. Page after page…

“What? Page after page? How long did your approach take? I solved it in like two rows with a simple substitution.” – said my friend after the exam.

Well, I did not. I solved the task but it took me 3 pages. 3 pages of raw mathematical calculus. Even the darkest math kinksters would shudder at that sight.

“Once I saw your task, I immediately thought it must be wrong. However, as your professor, I am obliged to inspect it. I was hoping that you would make an error at some point so I wouldn’t have to go through all of it, but no, you did it. You solved it correctly. But man, did I suffer through it!” – the professor said while storming off into the elevator, leaving me behind, probably because he could not bear the sight of me anymore. I understood since I basically put that man through torture.

To draw a parallel here – imagine the professor being a client and me being a business analyst, where the task from the exam was his business problem – what would his user experience be? From 0 to 10, probably well below 5 (and that’s a stretch), if we map that talk by the elevator to a feedback meeting, for instance. The product works, but the client is suffering while using it (2/10).

The point is, it is not only important that you find a correct solution to your client’s problems; it is also important that you try to optimize their way of working. Not every working solution is an acceptable one. If the client needs to click and scroll multiple times for information you know they’ll be using constantly, you need to rethink your design. As a business analyst, you are required not only to identify the problem and figure a way out – you are also required to understand the product and all of its possible pains.

Bonus tip: Use your development team as allies on this one, they are far more advanced in matters of removing nonsense from solutions.

Understanding by magic

One long summer during my studies, I attempted to learn mathematical theory for the very first time. For those of you who do not know what mathematical theory looks like – first question: how does it feel to be God’s favorite? Secondly, mathematical theory is just a description of a theorem (statement) and its proof. Or counterproof. Only not in sentences, but in mathematical symbols. When you first see it, it kind of looks a bit schizophrenic.

Day after day of sitting by the book, it just wouldn’t go into my head. Luckily, on one of those days my brother, who had already graduated from a similar faculty, came along, recognized my agony, and gave me some advice.

Little did I know that to learn mathematical theory you first need to learn how to learn it. You need to learn it by heart, but with understanding.

Let’s focus a bit on the understanding part.

When you see a bunch of symbols, your mind naturally tries to disregard as much of it as possible, in order to preserve memory. But this won’t help you much with the understanding part. Symbols in equations don’t appear by magic, so you cannot even try remembering them by heart without asking yourself where every single symbol comes from, and why. It’s not magic, it’s math – and math is precise and can always tell you where something comes from and what its purpose is. Once I realized that, it all made much more sense to me.

Looking at this from a project perspective – let’s say you’re dealing with a service placed somewhere in a complex IT ecosystem. Imagine that all the symbols mentioned above in the math theorem are little services that somehow communicate with each other inside this architecture. Applying my first instinct – disregarding as much as possible – would mean you are greatly skilled in your service’s domain knowledge, but generally, you’re not sure where your service is actually placed in the ecosystem and what is happening around it.

As a good business analyst and business developer, you need to understand the purpose of all those services, when and how they communicate, and to what purpose. You cannot isolate yourself only to your own domain, since your service actually communicates to the outside world and intersects with the others. If you do, you are at great risk of designing solutions which are not true, not complete, and not satisfactory.

A problem can occur somewhere outside your service model, and sooner or later, it will become your problem too. It’s the same in math – one equation after the other, at one point you’re going to figure out that the symbol you so easily disregarded, now is the key factor to the solution.


To conclude, just like with business analysis described by exhaustive documents, you could look at math the same way. But why should you? Math was never meant to be that way, nor should business analysis be. That’s just boring. Abstract thoughts of certain processes – be it the way your application responds to a certain API call triggered by an outside service, or be it the length of a tangent – coexist in a similar way, trying to reach out to you to find the most elegant solution.

Emina Džuzdanović

Implementing Event Store in C#


What is this all about?

When looking for examples of an Event Store in an Event Sourcing implementation, you will probably find a lot of them – very detailed in theory, but lacking practical implementation. In this article my goal is to explain how to implement a simple, yet robust, event store, how to test it, and what pitfalls you should expect, based on my own experience.

The event store presented here is implemented in .NET Core with C#, using an MS SQL LocalDB server as the database.

The full project, which contains a working solution, is available on my GitHub.

Short introduction to Event Sourcing

I will not go into much detail about what Event Sourcing is since this topic is covered in many books and articles around the web.

Instead, I will focus on what is important when implementing an actual Event Store. Nevertheless, I will quickly summarize the main idea behind Event Sourcing and its main challenges.

Aggregates are considered as a whole represented by the Aggregate Root. Conceptually an Aggregate is loaded and saved in its entirety. (Evans, 2001).

Event Sourcing is a software architecture pattern which states that the state of the application should be persisted as a sequence of events. The main benefits of Event Sourcing are:

  • Auditability — since state is constructed from a sequence of events, it is possible to extract a detailed log from the beginning up to the current date
  • Projections/queries — it is possible to recreate the state at a different point in time, since we have all the events from the beginning. For example, it would be possible to check what the bank account state was one year ago. This also allows us to generate queries/reports we never thought of when starting the system, since all the data is in the events.
  • Performance when inserting data — since the Event Store is an append-only store with optimistic locking, we would expect far fewer deadlocks (or concurrency exceptions) to happen. Also, there are no long-running transactions which insert a graph of data into multiple tables.
  • Flat database structure — we would usually use one (or at most two) tables as the event store. In both cases it would be a de-normalized form with some kind of weakly serialized field, such as JSON, to store the payload of the event. This means that when adding new fields it is not necessary to alter any table — simply adjusting the event and adding the required field will save it into the JSON. This allows much more rapid write-side development
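Because the payload ends up in a plain JSON column, serialization is trivial. A minimal sketch of how an event could be turned into that payload (the event shape here is a hypothetical example, not taken from the repository):

```csharp
using System;
using System.Text.Json;

// Hypothetical event shape – the real events live in the Core project.
public class AddressChanged
{
    public string PersonId { get; set; }
    public string NewAddress { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var evt = new AddressChanged { PersonId = "person-1", NewAddress = "Main Street 5" };

        // This string is what goes into the event store's payload column;
        // adding a property to the event needs no database schema change.
        string payload = JsonSerializer.Serialize(evt);
        Console.WriteLine(payload);
    }
}
```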

As with every pattern we must be aware of limitations/challenges. If used incorrectly, Event Sourcing will probably cause more harm than good. So, the main challenges we should keep in mind are:

  • Storage growth — since the data store is append only, the table will grow indefinitely. This can be mitigated using snapshots or retention policy strategies.
  • Replaying events — if the number of events needed to construct an aggregate is large, it might lead to performance issues when reconstructing the current state of the aggregate. This can also be mitigated by using snapshots.
  • Event versioning and event handling — when changing an existing event, or adding/deleting features, the code which projected the old events MUST stay in place, since it is used to reconstruct the state up to the actual state. This means that if some feature is deprecated, its code cannot be removed, since it is used to reconstruct the state at that time. This challenge is a bit harder to overcome, but it can be mitigated.

Event Store considerations

Requirements for the event store are the following:

  • It is append only, which means there are no updates, only inserts
  • It should store the aggregate state and allow fetching the events for a given aggregate in the order they were saved.
  • It should use an optimistic concurrency check: an optimistic concurrency check does not use locking on the database level, therefore reducing the risk of deadlocks. Instead, the concurrency check is done when saving.

Optimistic concurrency check

When inserting into a database where multiple clients exist, it can happen that two or more clients try to modify the same aggregate. Since we don’t use a pessimistic concurrency check, there will be no lock and no waiting; instead, the check itself is applied when trying to persist the actual data.

To make things clear let us consider an example:

Assume that there are two requests that want to modify the same aggregate. The concurrency check itself is implemented on the database level.

  1. Both of them fetch the current version from the event store, which is 1
  2. The first request saves successfully, setting the aggregate version to 2
  3. The second request also tries to save version 2, but that version already exists, so it fails the concurrency check. This indicates that the data has been changed since it was read. In this case, saving the second aggregate should fail with a concurrency exception.

Example of optimistic check using Aggregate version
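The same flow can be sketched with a tiny in-memory store. This is illustrative code only – the real store enforces the check via the unique database index described below – but it shows how the writer holding a stale version is rejected:

```csharp
using System;
using System.Collections.Generic;

// Append-only streams keyed by aggregate id. The version is simply the
// number of events saved so far; a save with a stale expected version
// is rejected (optimistic concurrency check).
public class InMemoryEventStore
{
    private readonly Dictionary<string, List<string>> _streams =
        new Dictionary<string, List<string>>();

    public int CurrentVersion(string aggregateId) =>
        _streams.TryGetValue(aggregateId, out var events) ? events.Count : 0;

    public void Save(string aggregateId, int expectedVersion, string eventData)
    {
        if (CurrentVersion(aggregateId) != expectedVersion)
            throw new InvalidOperationException(
                "Concurrency conflict: the stream was modified after it was read.");

        if (!_streams.ContainsKey(aggregateId))
            _streams[aggregateId] = new List<string>();
        _streams[aggregateId].Add(eventData);
    }
}

public static class Demo
{
    public static void Main()
    {
        var store = new InMemoryEventStore();

        // Both clients read the same current version (0).
        int readByA = store.CurrentVersion("person-1");
        int readByB = store.CurrentVersion("person-1");

        // Client A saves first – the stream version becomes 1.
        store.Save("person-1", readByA, "AddressChanged: Street 1");

        try
        {
            // Client B still holds version 0, so its save is rejected.
            store.Save("person-1", readByB, "AddressChanged: Street 2");
        }
        catch (InvalidOperationException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}
```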

Database schema

The Event Store will be a single append-only table, which allows version tracking per aggregate and implements the concurrency check on the database level.

SQL for the Event Store table, with a JSON field for the data (this is where the event payload is serialized):

CREATE TABLE [dbo].[EventStore](
    [Id] [uniqueidentifier] NOT NULL,
    [CreatedAt] [datetime2] NOT NULL,
    [Sequence] [int] IDENTITY(1,1) NOT NULL,
    [Version] [int] NOT NULL,
    [Name] [nvarchar](250) NOT NULL,
    [AggregateId] [nvarchar](250) NOT NULL,
    [Data] [nvarchar](max) NOT NULL,
    [Aggregate] [nvarchar](250) NOT NULL
)

SQL for EventStore example table

AggregateId and Version are the two fields used for the concurrency check; we create a unique index over them. AggregateId is the id of our aggregate and can be whatever we want (hence it is defined as a string). Depending on the domain it can be a GUID, an int, or a combination of the two; it does not really matter.

Note that AggregateId is defined as nvarchar(250)

CREATE UNIQUE NONCLUSTERED INDEX [ConcurrencyCheckIndex] ON [dbo].[EventStore]
(
    [Version] ASC,
    [AggregateId] ASC
)

A unique index is enforced on the Version and AggregateId fields of the table at the database level

Using this, we ensure that the same AggregateId/Version combination is never saved twice; instead, a unique-index-violation exception is thrown by the database. This is a transient error, which means a retry mechanism (see the Retry pattern) can, and should, be implemented on the client side.
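Such a retry can be as simple as re-running the whole unit of work (reload the aggregate, reapply the command, save) a bounded number of times. A sketch in Java for illustration; the exception type and names here are hypothetical, not part of the example project:

```java
import java.util.function.Supplier;

public class ConcurrencyRetry {
    // Hypothetical marker for the transient unique-index violation.
    public static class ConcurrencyException extends RuntimeException {}

    // Re-runs the unit of work until it succeeds or attempts run out.
    // The unit of work must reload the aggregate on each attempt so the
    // retry sees the latest stored version.
    public static <T> T withRetry(int maxAttempts, Supplier<T> unitOfWork) {
        ConcurrencyException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return unitOfWork.get();
            } catch (ConcurrencyException e) {
                last = e; // stale version: loop around and try again
            }
        }
        throw last; // give up after maxAttempts
    }
}
```

Bounding the attempts matters: under heavy contention an unbounded retry loop can live-lock, so failing after a few attempts and surfacing the error is the safer default.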

Example project short introduction

The project is built using .NET Core 3.1.

The architecture is layered, with inversion of control.


  • RestAPI — Web API which contains the DTO and REST controller definitions
  • Infrastructure — factories, the database model and repository implementations are defined here
  • Core — contains the business logic as well as the repository interface for the aggregate. This project has no references to any other project or third-party library (except the Tactical DDD nuget, which is pure C# code)

Other projects:

  • DbMigration — migration project used to initialize the database
  • EventStoreTests — testing project, which demonstrates integration tests for the event store

For the Core business logic, there is only one aggregate, named Person, and two domain events:

  1. PersonCreated — published when a person is created
  2. AddressChanged — published when the address of a given person changes

How to set up and run the project is described in the readme file of the github repository.

EventStore implementation

Let us take a look at the actual code that implements the event store. I will only put code snippets here; the fully functional project can be found on my github.

Interface for the EventStore can be defined as:

public interface IEventStore
{
    Task SaveAsync(IEntityId aggregateId,
        int originatingVersion,
        IReadOnlyCollection<IDomainEvent> events,
        string aggregateName = "Aggregate Name");

    Task<IReadOnlyCollection<IDomainEvent>> LoadAsync(IEntityId aggregateRootId);
}

The interface defines two methods.

  • SaveAsync persists an aggregate as a stream of events. The aggregate itself is described as a collection of domain events with a unique name.
  • LoadAsync fetches an aggregate from the event store, using the AggregateId as a parameter, and returns it as a collection of events. This collection can then be used to load the aggregate.

IEntityId and IDomainEvent are both imported from the Tactical DDD nuget, which I strongly recommend for DDD in C#. Both are simple marker interfaces for the EntityId and DomainEvent classes.

Let us analyze the actual implementation of these two methods:

Persisting Events

For persisting an aggregate into the EventStore we need three parameters:

  1. AggregateId — the id of the aggregate. In our case it is a class which implements the IEntityId interface
  2. OriginatingVersion — the version of the aggregate being saved. Once saved, the version is incremented by one. As explained, this is used in the optimistic concurrency check
  3. IReadOnlyCollection<IDomainEvent> — the actual list of events that needs to be persisted to the database. Each event is persisted as a new row.

Implementing SaveAsync

Full implementation of this method can be found in the EventStoreRepository.cs file.

First, the insert query is created using the provided parameters. For this we use the micro-ORM Dapper, which maps parameters using the @ notation.

var query = $@"INSERT INTO {EventStoreTableName} ({EventStoreListOfColumnsInsert})
               VALUES (@Id,@CreatedAt,@Version,@Name,@AggregateId,@Data,@Aggregate);";

var listOfEvents = events.Select(ev => new
{
    Aggregate = aggregateName,
    CreatedAt = DateTime.UtcNow,
    Data = JsonConvert.SerializeObject(ev, Formatting.Indented, _jsonSerializerSettings),
    Id = Guid.NewGuid(),
    Name = ev.GetType().Name,
    AggregateId = aggregateId.ToString(),
    Version = ++originatingVersion
});

Each parameter name (@Name, for example) is matched with the object property of the same name and mapped. That is why, in the next line, a list of anonymous objects is created with the same properties as those defined in the query.

Properties are:

  • Aggregate — the string name of the aggregate
  • CreatedAt — the date/time when the event was created
  • Data — the event payload; the complete event is serialized into a JSON string using the provided jsonSettings
  • Id — this can be any type of id. For this example I used a Guid
  • Name — the actual event name
  • AggregateId — the id of the aggregate. Using this field, events for a given aggregate can be filtered
  • Version — incremented each time for a given aggregate. Used in the optimistic concurrency check

The list of events is then mapped to the actual query using:

await connection.ExecuteAsync(query, listOfEvents);

This line uses Dapper's ExecuteAsync method, which maps the listOfEvents properties to the parameters defined in the query string and creates the actual queries.

When persisted this way, each event becomes a new row in the EventStore table, with the Data column holding the payload of the actual event. Here is how it looks:

Each event is saved as a new row in the database. Version changes per aggregate, and the sequence is always incremental

When inspecting the Data column, this is the payload:

{
  "$type": "Core.Person.DomainEvents.PersonCreated, Core",
  "PersonId": "d91f903f-3fb1-4b68-9a59-c1818c94f104",
  "FirstName": "damir6",
  "LastName": "bolic7",
  "CreatedAt": "2020-02-20T07:24:54.0490305Z"
}

This payload is mapped from the PersonCreated event, which is emitted when a new person is created:

public class PersonCreated : DomainEvent
{
    public string PersonId { get; }
    public string FirstName { get; }
    public string LastName { get; }

    public PersonCreated(
        string personId,
        string firstName,
        string lastName)
    {
        PersonId = personId;
        FirstName = firstName;
        LastName = lastName;
    }
}

This domain event is published when a new Person aggregate is created

DomainEvent class can be defined as follows:

public class DomainEvent : IDomainEvent
{
    public DomainEvent()
    {
        CreatedAt = DateTime.UtcNow;
    }

    public DateTime CreatedAt { get; set; }
}

Basically, the base class implements IDomainEvent from the Tactical DDD nuget and adds the CreatedAt timestamp, set in the constructor.

Loading aggregate

Loading an aggregate is done using the AggregateId. All events for the given aggregate are loaded, and the aggregate is then constructed from those events, which results in a new aggregate object in memory.

The aggregate is loaded as a list of events by an SQL query in the EventStoreRepository.LoadAsync() method. The real magic happens when the events are deserialized from JSON and converted into DomainEvent objects:

var events = (await connection.QueryAsync<EventStoreDao>(
    query.ToString(),
    aggregateRootId != null
        ? new { AggregateId = aggregateRootId.ToString() }
        : null)).ToList();

var domainEvents = events.Select(TransformEvent).Where(x => x != null).ToList().AsReadOnly();
return domainEvents;

Selecting all events for the aggregate and transforming them to domain events

As we can see, the events are fetched as a list of EventStoreDao objects, which are then converted to domain events using the TransformEvent method:

private IDomainEvent TransformEvent(EventStoreDao eventSelected)
{
    var o = JsonConvert.DeserializeObject(eventSelected.Data, _jsonSerializerSettings);
    var evt = o as IDomainEvent;
    return evt;
}

Here, the actual payload of the event, eventSelected.Data, is deserialized into an object which is then cast to the IDomainEvent interface. Note that if this conversion fails, null is returned; such events are filtered out by the Where clause above.

Once the list of domain events is fetched, the Person aggregate can be constructed.
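The reconstruction step itself is just a fold over the event list. A minimal sketch, in Java for illustration, with simplified stand-ins for the PersonCreated and AddressChanged events from the example project:

```java
import java.util.List;

public class PersonReplay {
    // Simplified stand-ins for the domain events of the example project.
    interface DomainEvent {}

    static class PersonCreated implements DomainEvent {
        final String firstName, lastName;
        PersonCreated(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    static class AddressChanged implements DomainEvent {
        final String address;
        AddressChanged(String address) { this.address = address; }
    }

    static class Person {
        String firstName, lastName, address;

        // Rebuild the current state by applying every stored event,
        // in the order it was saved, to a fresh instance.
        static Person from(List<DomainEvent> history) {
            Person p = new Person();
            for (DomainEvent e : history) {
                if (e instanceof PersonCreated) {
                    PersonCreated c = (PersonCreated) e;
                    p.firstName = c.firstName;
                    p.lastName = c.lastName;
                } else if (e instanceof AddressChanged) {
                    p.address = ((AddressChanged) e).address;
                }
            }
            return p;
        }
    }
}
```

Because later events overwrite earlier state, replaying the full history always yields the latest address while keeping every intermediate fact available in the store.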


Testing

Testing the event store is not hard.

For unit testing, the IEventStore interface can be mocked.

For integration tests, an in-memory database can be used. In the example project, LocalDB is used both for testing and for the actual implementation. The tests are located in the EventStoreIntegrationTests.cs file.


Conclusion

The goal of this blog is to show, using a concrete example, how to implement a simple Event Store in C#. For this we used DDD concepts such as Aggregate, Repository, Entity and ValueObject.

The example project included with this blog aims to be a simple demonstration of the principles described here.

Author: Damir Bolić

Life after Event Sourcing


I am not going to talk about implementing Event Sourcing, pros and cons, or when and where to use it. I want to share my personal developer’s perspective and experience gathered from the two projects I worked on. Both were aiming to develop microservices using domain-driven design principles. The first project (let us call it A) had Event Sourcing and the second one (project B) did not. In both cases, a relational database was required to be used for data persistence.

Project A produced a microservice that runs in production without major issues. After resolving a few challenges with the projection mechanisms, we ended up with a faithful domain model representing the business processes, domain events holding important domain facts, and a well-established ubiquitous language spoken by both the team and the business experts. When the team was assigned to develop microservice B, the same practices were carried over. But then I realized that it would not go as smoothly as before.

Useless Domain Events

When I first heard “Once you get the hang of using Domain Events, you will be addicted and wonder how you survived without them until now”, it sounded a bit pretentious and exaggerated, but that is exactly how I have felt since I was introduced to Domain-Driven Design. In my opinion, the greatest power of DDD comes from domain events.

When using event sourcing, everything revolves around domain events. Working on project A could be described as talking, thinking, storming, modeling, projecting, storing, trusting and investigating domain events. On project B, however, we were saving only the latest state of the aggregate and conveniently deriving the read models from it as well. So, no storing, and no domain events needed for projections. To make things worse, nothing needed to subscribe to the domain events the aggregate was publishing. Naturally, we decided not to use the concept of domain events at all, but as a consequence we had to change the mindset we were used to and continue with a weaker form of DDD.

Painful refactoring

As domain knowledge gradually expanded, continuous refactoring and adjustment of the domain model were common practices during the development of microservice A. With event sourcing, one can completely remodel the aggregates without having to change anything in the data-persisting mechanism: aggregate properties are just placeholders filled with data loaded from the stored facts, the domain events. But when one stores only the latest state of the aggregates, every change drags along database adjustments: altering tables, creating new ones, or migrating existing data. Even though we had created suitable data access objects mapped from the aggregates, changes to the database or to the mappers were inevitable and often time-consuming.

Less focus on the core domain

It did not take long for the team to feel that our focus had moved from the core domain to the infrastructure while working on project B. Even the slightest change cost us too much time, the precious time we used to spend discussing and remodeling the business domain. Not to mention pull requests with more files to review and more code to cover with tests.

Wrapping up

After going through these pain points, I concluded that event sourcing is a natural fit and a powerful supporting tool that strengthens domain-driven design. Event sourcing brings easier and faster development, testing and debugging. It helps to focus on what is really important: the facts happening in the system, shaped as domain events. It facilitates adjusting and continuously improving the core domain model. The life of a developer is much easier with event sourcing. That said, this should not be the primary factor when deciding whether a system should be event sourced or not. But that is another story for another time.

Author: Nusreta Sinanović