Let’s build a house

While staring out of the window of an Airbus A319-100, traveling to Amsterdam for “DDD / Event Sourcing Europe”, I had a sudden rush of inspiration and decided to try and scribble a quick blog post.

Here goes nothing…

When talking about software, we often hear the term “architecture” mentioned a lot. That inevitably reminds us of buildings and construction. While the comparison holds to some extent, there are a lot of differences between the approach we take towards architecting a building (a house, for example) and a typical web application software system. Much more so if we employ agile practices while doing so (which I hope we all do).

Software architecture is more about defining and formalizing constraints and gathering quality attributes that we need to satisfy while implementing business requirements.

A good architect tries to defer most of the architectural decisions that don’t affect the most important quality attributes directly. These decisions can then be made downstream by the developers in charge of building the actual solution (the architect himself should be one of them).
This means that software architecture is not about making all of the decisions upfront, but rather only the important ones.

Let’s look at a simple example. Let us contrast building a house in the real world with building a virtual house (a hypothetical software system, maybe a simple web application).

Some of the things of concern (but not limited to these) might be:

  1. Land (Infrastructure)
  2. Floor plans (Architecture / technical diagrams)
  3. Foundations (Initial project setup)
  4. Floors and the roof, walls, bulkheads, etc… (Software Layers)
  5. Functional rooms + furniture (Actual features that bring value)

So how do these compare?

Land (Infrastructure)

When building a house we need to purchase (allocate) the entire piece of land that our house will sit on top of. Plus, we need to account for some more if we decide to make some extensions to our house later (a garage maybe, a garden…).

This implies that most of the time we need to know the size of the land needed upfront, which in turn means we are required to make a large upfront payment to purchase the land before we even start the actual construction.

In contrast, when building a software system the agile way, we try to avoid these upfront commitments. Why? Because we can!

Why is this the case?

Ask yourself, what is software? If you think hard enough, you will come to the conclusion that software, in essence, is just a pile of text, compiled to machine language (instructions) and executed to (hopefully) perform some useful work for us.

And what is the nature of “text”?
Well, most importantly, text can be changed, expanded, erased, reorganized, burned and thrown away. It’s subject to change.

If you think otherwise, you are living a lie and becoming aware of this asap will do you much, much good.

But even if you choose not to be aware of this, I can tell you one thing for sure: The clients are!

So, going back to our example, how much land (infrastructure) do we allocate for our software system? The answer is, as little as possible, or to put it differently, only as much as we need to.

Staying agile is all about iterations, tight feedback loops and building as little as needed when we need it.

We try to avoid making big upfront commitments and incurring costs that are uncalled for. As a side note, if you’ve ever wondered what cloud computing is all about, now you know (pay only for what you use)…

For the software system in our example (our virtual house), we would allocate/purchase only the resources we need for our first iteration, feature or sprint. Nothing more, nothing less, because unlike with land, we can always buy more.

Floor plans

When building a house in the real world, in 99% of the cases we have the whole plan ready upfront. We hire an architect, decide on almost all features (rooms, layout), materials, etc., and again we make another upfront payment, and only then can we start building the foundations for our house.

After this point, pretty much every decision is immutable, and any change to the initial plan is hard to make or would surely incur a lot of extra unplanned cost because once concrete hardens it’s really tough to break.
That’s just the nature of it.

We take this for granted because we understand the reasoning behind it and have learned to live with it.

But surprisingly this isn’t true for software. Many people still compare building software with building houses, and this couldn’t be further from the truth.

As we said, software is different. A good architect does not make all of the decisions up front. Why? Because software is easy to change (and will change, it’s just text), so we don’t want to limit the legroom that we will most surely need later.

Most importantly, we don’t want to spend any more money than necessary. Will we like that bedroom on the second floor? Will we even need it?!

Everything else

After the foundations are laid down, it’s time to start building our floors.

Floor by floor, layer by layer, weeks and months pass while we wait with our fingers crossed and hope for the best.

Finally, when the house is done, final work can begin and only then can we start throwing in the furniture.

A few months later, we finally get to use our house.

I think you can see the pattern by now.

Building a house deals with horizontal slices. Floor by floor, layer by layer, because it’s easier to do it this way (and it’s cost-effective).

But when building software systems, we deal with vertical slices (or should). What does this mean?

When creating software solutions, we start from the most important thing (the core), build a complete vertical slice for it, and then work our way out (horizontally) by adding new features to the core or improving the core itself.

In our virtual house example, this could be anything really, but let’s take a bedroom as an example.

Let’s pretend that we sat with the customer and decided that having a place to sleep in asap would be of the highest priority.

Thus, our main goal should be to deliver a single useful feature for our customer which can be tested out and for which we can get constructive feedback at the earliest moment possible.

I hope you can see why this is useful (in contrast to waiting months for the first feedback).

So, some of our basic goals are:

How would we go about doing this? We do the simplest thing. We simply follow the path of the least resistance. In analogy to the real house, building our virtual house might follow steps similar to these:

We are now ready to call in our client and receive some constructive feedback about it. Hell, he could even spend the night.

Once we have our feedback, the client decides what is the next most important thing for him…

This way we ensure that the customer always gets what he wants, and not what he asked for! — Remember what we said about the nature of software! Customers have a nose for this and they will always want something other than what they initially specified.

So in a nutshell:

Final words

If you take away anything, let it be this:

Building software is nothing like building a single house/building.

Building software is much more involved than this. To build, evolve and maintain a software system is more similar to city planning than to building a single house. (Let’s face it, it’s never a single house anyway).

Cities, like software, have lives of their own. Cities evolve, grow (or even shrink) in size. New buildings (features) are added all the time, old ones are refurbished or demolished (refactoring).

New water and heating lines are added, removed and replaced on a daily basis. Metro lines are bored under the city.

While doing all of this, you can never shut down a city. You can close down some parts of it but as a whole, it must always remain functional.

This is why building software is neither simple nor easy, but only by embracing change and failure (this might be a topic for another blog post) can we be prepared, and triumph!

Author: Anes Hasičić

Generic repository pattern using Dapper

Implementing the repository pattern with Dapper can be done in many ways. In this blog I will explain what the repository pattern is and how to implement it using reflection in C#.

When searching around the web I found various implementations, none of which was satisfactory — mostly due to having to manually enter table and field names, or to overly complex implementations with Unit of Work, etc.

A repository similar to the one presented here is in use in a production CQRS/ES system and works quite well.

Even though the use case depicted here is quite “unique”, I think this repository implementation can be applied to most relational database systems with minimal refactoring.

What is Repository pattern?

When talking about the Repository pattern it is important to distinguish between the DDD implementation of a repository and the generic repository pattern.

A generic repository is a simple contract defined as an interface on a per-object basis. That means that one repository object is related to one table in the database.

A DDD repository works with an aggregate root object and persists it to one or more tables or, if event sourcing is used, as a series of events in an event store. So in this instance the repository is related not to one table but to one aggregate root, which can map to one or more tables. This is a complex process due to the object-relational impedance mismatch, which is better handled with ORMs, but that is not our use case.

Generic repository UML diagram:

  • GenericRepository abstract class implements the IGenericRepository interface. All shared functionality is implemented in this class.
  • ISpecificRepository interface should have the methods required for the specific use case (if any)
  • SpecificRepository class inherits from the GenericRepository abstract class and implements any specific methods from ISpecificRepository.

Unit Of Work and transaction handling

The Unit of Work pattern implements a single transaction for multiple repository objects, making sure that all INSERT/UPDATE/DELETE statements are executed in order and atomically.

I will not be using Unit of Work but will rather save through each repository directly using its Update/Insert methods. The reason for this is that these repositories are designed toward a specific use case detailed below.

All transaction handling is done manually by wrapping multiple repository commands in a .NET transaction object. This gives a lot more flexibility without adding additional complexity.

Repository implementation

Let us define the interface first:

public interface IGenericRepository<T>
    {
        Task<IEnumerable<T>> GetAllAsync();
        Task DeleteRowAsync(Guid id);
        Task<T> GetAsync(Guid id);
        Task<int> SaveRangeAsync(IEnumerable<T> list);
        Task UpdateAsync(T t);
        Task InsertAsync(T t);
    }

The bootstrap code for the repository class is responsible for creating the SqlConnection object and opening the connection to the database. After that, Dapper will utilize this connection to execute SQL queries against the database.

public abstract class GenericRepository<T> : IGenericRepository<T> where T: class
    {
        private readonly string _tableName;

        protected GenericRepository(string tableName)
        {
            _tableName = tableName;
        }
        /// <summary>
        /// Generate new connection based on connection string
        /// </summary>
        /// <returns></returns>
        private SqlConnection SqlConnection()
        {
            return new SqlConnection(ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString);
        }

        /// <summary>
        /// Open new connection and return it for use
        /// </summary>
        /// <returns></returns>
        private IDbConnection CreateConnection()
        {
            var conn = SqlConnection();
            conn.Open();
            return conn;
        }

        private IEnumerable<PropertyInfo> GetProperties => typeof(T).GetProperties();

Make sure you have a connection string named MainDb. I am using MSSQL LocalDB, a lightweight MSSQL database shipped with Visual Studio.

<add name="MainDb"
connectionString="Data Source=(localdb)\mssqllocaldb;Integrated Security=true;Initial Catalog=dapper-examples;"
providerName="System.Data.SqlClient"/>

Implementing most of these methods, except for Insert and Update, is quite straightforward using Dapper.

public async Task<IEnumerable<T>> GetAllAsync()
        {
            using (var connection = CreateConnection())
            {
                return await connection.QueryAsync<T>($"SELECT * FROM {_tableName}");
            }
        }

        public async Task DeleteRowAsync(Guid id)
        {
            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync($"DELETE FROM {_tableName} WHERE Id=@Id", new { Id = id });
            }
        }

        public async Task<T> GetAsync(Guid id)
        {
            using (var connection = CreateConnection())
            {
                var result = await connection.QuerySingleOrDefaultAsync<T>($"SELECT * FROM {_tableName} WHERE Id=@Id", new { Id = id });
                if (result == null)
                    throw new KeyNotFoundException($"{_tableName} with id [{id}] could not be found.");

                return result;
            }
        }

        public async Task<int> SaveRangeAsync(IEnumerable<T> list)
        {
            var inserted = 0;
            var query = GenerateInsertQuery();
            using (var connection = CreateConnection())
            {
                inserted += await connection.ExecuteAsync(query, list);
            }

            return inserted;
        }

 

For SaveRangeAsync, a list of items is provided; the items are saved to the database and the number of items saved is returned. This can be made atomic by wrapping the call in a transaction (see the sketch below).
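As a minimal sketch (not part of the original code), assuming a repository instance and an item list like the ones used in the usage example later in this post, wrapping the call in a TransactionScope from System.Transactions makes the whole batch atomic:

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // The connection opened inside SaveRangeAsync enlists in this ambient transaction.
    var saved = await repository.SaveRangeAsync(items);

    // If any insert throws, Complete() is never reached and everything rolls back.
    scope.Complete();
}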

Implementing Insert and Update queries

Implementing insert and update requires a bit more work. The general idea is to use reflection to extract the property names from the model class and then generate the insert/update query based on those names. The property names will be used as parameter names for Dapper, therefore it is important to make sure that the DAO class property names are the same as the column names in the actual table.

In both cases the idea is the same: take the object provided as an input parameter and generate a SQL query string with parameters. The only difference is which query is generated, INSERT or UPDATE.

Both methods use reflection to extract the property names from the model object. The helper below can be made static, since it does not use any instance state, and for performance reasons.

private static List<string> GenerateListOfProperties(IEnumerable<PropertyInfo> listOfProperties)
        {
            return (from prop in listOfProperties
                    let attributes = prop.GetCustomAttributes(typeof(DescriptionAttribute), false)
                    where attributes.Length <= 0 || (attributes[0] as DescriptionAttribute)?.Description != "ignore"
                    select prop.Name).ToList();
        }

 

What this does is extract the property names into a List<string> using reflection. It will not extract properties marked with the “ignore” Description attribute.
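For illustration, the User DAO used in the usage example below might look like this; the read-only FullName property is a hypothetical addition here, purely to show the “ignore” Description attribute (from System.ComponentModel) in action:

public class User
{
    // Property names match the column names of the Users table.
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Not a real column: marked with "ignore" so it is skipped when
    // generating INSERT/UPDATE statements.
    [Description("ignore")]
    public string FullName => $"{FirstName} {LastName}";
}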

Once we have this list we can iterate over it and generate the actual query:

public async Task InsertAsync(T t)
        {
            var insertQuery = GenerateInsertQuery();

            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync(insertQuery, t);
            }
        }

private string GenerateInsertQuery()
        {
            var insertQuery = new StringBuilder($"INSERT INTO {_tableName} ");
            
            insertQuery.Append("(");

            var properties = GenerateListOfProperties(GetProperties);
            properties.ForEach(prop => { insertQuery.Append($"[{prop}],"); });

            insertQuery
                .Remove(insertQuery.Length - 1, 1)
                .Append(") VALUES (");

            properties.ForEach(prop => { insertQuery.Append($"@{prop},"); });

            insertQuery
                .Remove(insertQuery.Length - 1, 1)
                .Append(")");

            return insertQuery.ToString();
        }

 

The Update method has some small differences:

public async Task UpdateAsync(T t)
        {
            var updateQuery = GenerateUpdateQuery();

            using (var connection = CreateConnection())
            {
                await connection.ExecuteAsync(updateQuery, t);
            }
        }

private string GenerateUpdateQuery()
        {
            var updateQuery = new StringBuilder($"UPDATE {_tableName} SET ");
            var properties = GenerateListOfProperties(GetProperties);

            properties.ForEach(property =>
            {
                if (!property.Equals("Id"))
                {
                    updateQuery.Append($"{property}=@{property},");
                }
            });

            updateQuery.Remove(updateQuery.Length - 1, 1); //remove last comma
            updateQuery.Append(" WHERE Id=@Id");

            return updateQuery.ToString();
        }

 

An additional thing we need to take care of here is what happens if the record to be updated is not found. There are a couple of solutions for this; some include throwing an exception, others returning an empty object or otherwise notifying the calling code that the update was not performed.

In this case we are relying on Dapper’s ExecuteAsync method, which returns an int: the number of affected rows.
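If callers need to know whether anything was actually updated, one possible approach (a sketch only; TryUpdateAsync is not part of the interface above) is to surface that row count from inside GenericRepository:

public async Task<bool> TryUpdateAsync(T t)
{
    var updateQuery = GenerateUpdateQuery();

    using (var connection = CreateConnection())
    {
        // ExecuteAsync returns the number of affected rows.
        var affectedRows = await connection.ExecuteAsync(updateQuery, t);
        return affectedRows > 0; // false => no row with the given Id was found
    }
}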

Example of generic repository usage:

public static async Task Main(string[] args)
        {
            var userRepository = new UserRepository("Users");
            Console.WriteLine(" Save into table users ");
            var guid = Guid.NewGuid();
            await userRepository.InsertAsync(new User()
            {
                FirstName = "Test2",
                Id = guid,
                LastName = "LastName2"
            });


            await userRepository.UpdateAsync(new User()
            {
                FirstName = "Test3",
                Id = guid,
                LastName = "LastName3"
            });


            List<User> users = new List<User>();

            for (var i = 0; i < 100000; i++)
            {
                var id = Guid.NewGuid();
                users.Add(new User
                {
                    Id = id,
                    LastName = "aaa",
                    FirstName = "bbb"
                });
            }

            var stopwatch = new Stopwatch();
            stopwatch.Start();
           
           
            Console.WriteLine($"Inserted {await userRepository.SaveRangeAsync(users)}");

            stopwatch.Stop();
            var elapsed_time = stopwatch.ElapsedMilliseconds;
            Console.WriteLine($"Elapsed time {elapsed_time} ms");
            Console.ReadLine();
        }
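The example above assumes a specific repository for the Users table; following the structure described earlier, a minimal sketch of it might look like this (any use-case-specific methods would be added to it):

public class UserRepository : GenericRepository<User>
{
    // Only needs to pass the table name up to the generic base class.
    public UserRepository(string tableName) : base(tableName)
    {
    }
}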

 

Use case in CQRS/ES Architecture

CQRS stands for Command Query Responsibility Segregation and is an architectural pattern which separates the read model from the write model. The idea is to have two models which can scale independently and are optimized for either reads or writes.

Event Sourcing (ES) is a pattern which states that the state of an object is persisted in the database as a list of events, and can later be reconstructed to the latest state by applying these events in order.
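As a rough illustration of that idea (not taken from the system described here; all names below are hypothetical), the latest state is rebuilt by replaying the stored events in order:

public class AmountDeposited { public decimal Amount { get; set; } }
public class AmountWithdrawn { public decimal Amount { get; set; } }

public class AccountState
{
    public decimal Balance { get; private set; }

    // Rebuild the latest state by applying the stored events in order.
    public static AccountState Replay(IEnumerable<object> history)
    {
        var state = new AccountState();
        foreach (var e in history)
        {
            switch (e)
            {
                case AmountDeposited d: state.Balance += d.Amount; break;
                case AmountWithdrawn w: state.Balance -= w.Amount; break;
            }
        }
        return state;
    }
}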

I will not go into explaining what these two patterns are and how to implement them, but rather focus on a specific use case I’ve dealt with on one of my projects: how to use a relational database (MSSQL) for both the read model and the event store, utilizing the Dapper data mapper and the generic repository pattern. I will also touch, albeit briefly, on event sourcing using the same generic repository.

Example architecture:

Before going any further, let us consider why using a data mapper would be more beneficial than using an ORM for this particular case.

Impedance Mismatch in Event Sourcing

An object-relational impedance mismatch refers to a range of problems with representing data from relational databases in object-oriented programming languages.

The impedance mismatch has a large cost associated with it. The reason is that the developer has to know both the relational model and the object-oriented model. Object-Relational Mappers (ORMs) are used to mitigate this issue, but they do not eliminate it. They also tend to introduce new problems: the virtual properties required by EF, issues with mapping private properties, polluting the domain model, etc.

When using events as the storage mechanism in an event store, there is no impedance mismatch. The reason is that events are a domain concept and are persisted directly in the event store without any need for object-relational mapping. Therefore, the need for an ORM is minimal, and using Dapper with a generic repository becomes more practical.

Database model considerations

In this use case MSSQL will be used for both the write and read sides, which adds to the reusability of the Dapper repository since it can be used on both sides.

Primary key consideration

In this example I used Guid (.NET Guid, mapped to the uniqueidentifier MSSQL data type) as the primary key identifier. It could have been something else, like long, int or string.

In any case this would require some additional work on the Dapper repository. First, the interface would need to change to accept an additional primary key type. Then, depending on the type, there might be some additional work to modify the generated queries.

Having more than one column as the primary key would also imply additional work, and in that case using the Dapper/generic repository pattern would probably be counterproductive. We should opt for a full-blown ORM instead!

Bulk inserts with Dapper

Dapper is NOT suitable for bulk inserts, i.e. performing a large number of INSERT statements. The reason is that the ExecuteAsync method internally uses a foreach loop to generate and execute the insert statements. For a large number of records this is not optimal, and I would recommend using either the SQL bulk copy functionality, a Dapper extension which allows bulk copy (a commercial extension), or simply bypassing Dapper and working with the database directly.
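For reference, a rough sketch of the SQL bulk copy route for the Users table from the earlier example (assuming its columns match the DataTable built below; SqlBulkCopy lives in System.Data.SqlClient):

using (var bulkCopy = new SqlBulkCopy(ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString))
{
    bulkCopy.DestinationTableName = "Users";

    // Stage the rows in a DataTable whose columns mirror the target table.
    var table = new DataTable();
    table.Columns.Add("Id", typeof(Guid));
    table.Columns.Add("FirstName", typeof(string));
    table.Columns.Add("LastName", typeof(string));

    foreach (var user in users)
        table.Rows.Add(user.Id, user.FirstName, user.LastName);

    await bulkCopy.WriteToServerAsync(table);
}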

Transactions handling

The use case for applying a transaction is when saving into more than one table atomically. Saving into the event store is an example of this: save into the AggregateRoot table and the Events table as one transaction.

Transactions should be controlled manually, either at the Command level (in the CQRS command implementation) or inside the repository.

This example with two tables is inspired by Greg Young’s design which can be found here: https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf

using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
            {
                // Fetch Aggregate root if exists, create new otherwise
                var aggRoot = await _aggregateRootRepository.FindById(aggregateId.Identity);
                if (aggRoot == null)
                {
                    aggRoot = new AggregateRootDao
                    {
                        Id = Guid.NewGuid(),
                        CreatedAt = DateTime.Now,
                        AggregateId = aggregateId.Identity,
                        Version = 0,
                        Name = "AggregateName"
                    };

                    await _aggregateRootRepository.InsertAsync(aggRoot);
                }
                else
                {
                    // Optimistic concurrency check
                    if (originatingVersion != aggRoot.Version)
                        throw new EventStoreConcurrencyException($"Failed concurrency check for aggregate. Incoming version: {originatingVersion} Current version: {aggRoot.Version}");
                }

                // Materialize the events once to avoid multiple enumeration
                var domainEvents = events as IDomainEvent[] ?? events.ToArray();

                foreach (var e in domainEvents)
                {
                    // Increment Aggregate version for each event
                    aggRoot.Version++;
                    e.Version = aggRoot.Version;

                    // Store each event with incremented Aggregate version 
                    var eventForRoot = new EventDao()
                    {
                        CreatedAt = e.CreatedAt,
                        AggregateRootFk = aggRoot.Id,
                        Data = JsonConvert.SerializeObject(e, Formatting.Indented, _jsonSerializerSettings),
                        Version = aggRoot.Version,
                        Name = e.GetType().Name,
                        Id = e.Id,
                    };

                    await _eventsRepository.InsertAsync(eventForRoot);
                }

                // Update the Aggregate
                await _aggregateRootRepository.UpdateAggregateVersion(aggRoot.Id, aggRoot.Version);


                transaction.Complete();
            }

 

If the aggregate root is not found, it is created and inserted into the AggregateRoot table. After that, each domain event is serialized and saved into the Events table. All of this is wrapped in a transaction and will either fail or succeed as one atomic operation. Note that the transaction uses the TransactionScopeAsyncFlowOption.Enabled option, which allows async code to be invoked inside its body.

Conclusion

The implementation here can be further optimized for use in CQRS/ES systems, but that is outside the scope of this post. It gives enough flexibility to easily extend a specific repository with new functionality, just by adding custom SQL queries to it.

Author: Damir Bolić

Domain Driven Design and the art of Pushing Back

Closing thoughts

In order to successfully practice domain driven design, developers can’t just be given requirements. They need to be actively involved in refining business processes and suggest changes to them. Period.

Author: Anes Hasičić

TDD & Pair Programming in practice

TDD & Pair Programming in practice

When I was a Java software engineer with a few months of experience I encountered TDD only through participating in a few workshops or from books & blogs. I thought “yeah, yeah, it is all nice in theory, but who really uses TDD in practice?” C’mon, really? Just do a story or a task as fast and as well as you can, maybe write some integration or unit tests, maybe leave it “for later” … meh… who really cares… It’s a legacy project so we don’t want to put too much effort into it and hopefully we’ll rewrite it soon into a better technology and/or framework. What about pair programming? LOL. No way am I going to work with some random dude all the time. What if the person annoys me or they will not want to do anything, or even worse, what if they are so much better and faster that they will not want to explain anything and I will not get the chance to learn? Also, who has the responsibility for the story in pair programming?

All these were my prejudices & questions about the aforementioned concepts. In this blog, I’m going to tell you how working on a Belgian public sector project gave me the opportunity to work in a pair & made me disciplined about writing tests before the implementation.

TDD

TDD (Test Driven Development) is, simply put, a way of developing where we first write a test and the implementation comes afterwards. To avoid repeating or copying definitions, I’ve provided some useful links for you to check out, and the rest you can find easily on the Internet.

Here you can watch what Uncle Bob has to say about it:

https://www.youtube.com/watch?v=AoIfc5NwRks

If you wish to practice TDD, there are some cool examples on the web like this:

https://osherove.com/tdd-kata-1

Also, it’s always nice to read a book:

https://www.goodreads.com/book/show/387190.Test_Driven_Development

Pair programming

Pair programming is, in short, two people developing at the same time. This means that one developer types the code while the other one gives suggestions and warns about possible bugs in the code. Usually they take turns writing code and assisting. Sometimes one writes a test and the other one writes the implementation, or one writes code for the first half of a working day and the other one writes for the second half of the day. It all depends on how two people like to work, but the idea is that both members of the pair are focused on the work at the same time in order to produce quality, bug-free code.

You can read more about pair programming here:

http://www.extremeprogramming.org/rules/pair.html

https://www.codementor.io/pair-programming

How does it look in practice?

Way back, when I had only a few months of experience as a Java web developer, I didn’t even know there was a different way of programming than: “do the implementation and after that make sure it works by writing tests for it”. The first time I heard about TDD was during a Spring framework introduction workshop I attended, but I wasn’t familiar with pair programming until I started working at my current company, Tacta.io. There I was told that such an approach leads to better code quality & a smaller number of bugs in the code. OK. I understood that TDD is, in theory, the “right” way to develop code, but at first I was skeptical about whether people really develop in a TDD way and how pair programming even works or helps.

My first real experience with TDD and pair programming happened while working on a big project with around 1,890,000 lines of code (1,007,000 of which were test code). We are also talking about a project that had its first commit in 2010, after 2 failed development attempts that hadn’t followed the concepts mentioned earlier. So how do you work on that kind of project and keep it maintainable for all these years?

Some of the most important ways to help keep the code base maintainable are not only using TDD & pair programming, but also developing according to clean code and DDD (Domain Driven Design) principles. All this pays off in the long run, especially when you have a complicated domain and business logic in the project.

Alongside things mentioned above, we also have a strict development process that we follow:

  • Daily standups – say what you did yesterday, what you are doing today & whether something is blocking you
  • Pair modeling – a process in which two people write a document with a functional & technical summary of a story that needs to be implemented
  • Story kickoff – after the pair modeling, a story is divided into tasks and presented to the team. The team asks questions and also gives suggestions if something can be done better
  • Pair programming – two developers actively develop a story; one takes the lead on the story, which means he/she will work on it until it’s done, while the other member of the pair changes on a daily basis
  • TDD – we strongly try to keep the discipline of writing tests first. Unit tests usually go first and the implementation afterwards. Production code is also backed up with integration and end-to-end tests
  • DDD – while developing, we should be aware that the project must follow DDD concepts and rules
  • Proxy check – check the story with proxies (proxies are people who talk to users and make sure that the stories they wrote are functionally OK)
  • Merge/pull requests – create a merge request so that other developers can take a look at your code and approve it or request changes if necessary

Yeah, yeah, it is all nice in theory, but in practice who really uses TDD?

We do. After a story kickoff you sit with your other half of the pair for that day and start working on the story. First you create a test, then the implementation, then some more tests and so on. Of course, do not forget to refactor! Alongside unit tests, you often need to add integration tests and/or end-to-end Selenium tests to check that everything is OK when a user works with the web application in a browser.

No way am I going to work with some random dude all the time!

IMHO, working with people is sometimes the hardest part of my job, but working in a pair has more benefits than disadvantages. Yeah, it can happen that you aren’t so fond of the person who is pairing with you that day, but that happens rarely and you switch pairs every day. Sometimes discussions can take a lot of time, but in the end they also teach you how to defend and explain why you do what you do & develop your communication “soft” skills. Also, I learned a lot about how other people think & work. With pair programming you get a chance to share a lot of knowledge, because you are literally working next to another developer on the same thing. In the end you do not escape responsibility. You share it with your pairs, but if you are the story lead, then you should ensure that the story is implemented as well as possible, both in the code and functionally.

TDD & pair programming often work as shown in this simplified scenario:

Developer A: Oh, this story is easy, just like we said at the kickoff! Let’s add a method to the service to search for files that have status “ACTIVE”.

Developer B: Great, we are working according to TDD, so let’s start with a unit test! We should add a test that creates two mock files. The first one should have “INACTIVE” status and the other file should have “ACTIVE” status. This test will then call our method and assert that it returns a list with only one “ACTIVE” file.

Developer A: Nice, I’ll do the test and you can do the implementation, and after that we switch?

Developer B: It’s a deal. I’ll help you by watching and commenting, and I expect the same from you when I’m writing code.

Hopefully our two developers created some quality code with good test coverage. But even if they didn’t, they still have to pass the proxy check and merge request reviews. 🙂
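For illustration only, the exchange above might translate into something like the sketch below. The project in question is Java, but to stay consistent with the other code in this post series the sketch uses C# with xUnit; every type and member name in it is hypothetical, and plain in-memory objects stand in for the mock files:

public class FileRecord
{
    public string Name { get; set; }
    public string Status { get; set; }
}

public class FileService
{
    private readonly List<FileRecord> _files;

    public FileService(List<FileRecord> files)
    {
        _files = files;
    }

    // The implementation, written only after the test below was red.
    public List<FileRecord> GetActiveFiles() =>
        _files.Where(f => f.Status == "ACTIVE").ToList();
}

public class FileServiceTests
{
    [Fact]
    public void GetActiveFiles_ReturnsOnlyActiveFiles()
    {
        // Arrange: one ACTIVE and one INACTIVE file
        var files = new List<FileRecord>
        {
            new FileRecord { Name = "a.txt", Status = "ACTIVE" },
            new FileRecord { Name = "b.txt", Status = "INACTIVE" }
        };
        var service = new FileService(files);

        // Act
        var result = service.GetActiveFiles();

        // Assert: only the ACTIVE file is returned
        Assert.Single(result);
        Assert.Equal("ACTIVE", result[0].Status);
    }
}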

Conclusion

After working for more than a year according to TDD and in pair, some of my opinions on those subjects have changed:

  • TDD changes the way you think and work. The ability to create a test before the implementation proves that you understand what needs to be done. With each test you improve and refactor your code, and tests ensure that the code does what it was intended for.
  • If you are working on a big project, it is hard to know everything, and working in a pair speeds up the development.
  • Two people usually have their discussions and explanations out loud, which maybe wouldn’t happen so often with “solo” development. This leads to better knowledge sharing, code quality and edge case test coverage.
  • I met some amazing people and each of them has their own unique way of thinking. I learned a lot from them – sometimes it was the use of some new IDE shortcuts, sometimes it was how to think in a DDD way, and sometimes we learned together about the project and technologies like the Java language, the Spring framework or Hibernate.
  • I’ve found my preferred way to pair program and, in my experience, I remain most focused if I switch roles based on the test/implementation cycle.

I hope that this blog has given you some idea how TDD and pair programming look in practice. If you have more questions about it, feel free to send me an email at: luka.sekovanic@tacta.io.

Author: Luka Sekovanić

Organizing software development

I’ve worked in software development for 20 years. Through those years, in the context of dozens of different projects in different industries, I have witnessed the evolution of software process organization from Waterfall to RUP and on to today’s agile methodologies.

What is so challenging and difficult about software development processes that sparks so many discussions and even requires people and organizations who make a living by simply advising on the matter?

Well, my answer is – the quality of the people. The second thing is – the size of the team, and the third – leadership by example.

 

I will start with an example of a project I once worked on back in 2004. Five guys (me included) were assigned the task of developing a new worldwide part-ordering system for a client in the automotive sector. The existing database (mainframe) was there. The requirements were there. We had no process at all. Our lead architect was developing with us, the analyst was sitting at the table next to us. The manager would come from time to time with a demand for the next delivery; we would all listen to him at that moment and return to work after that. Everything on that project went perfectly smoothly, on time and on budget. How would we call this? A success story that was agile even before the term was used? Could be.

But an even better answer is that this was a project where only people with enough common sense and ownership were involved. And there were not too many of us. The guy who knew everything about the domain really cared about the business and was coding with us.

The following is an example of the other extreme. A project with almost 100 developers, many managers, testers and analysts – all paid from a huge government budget. In the end, it did not deliver anything useful and huge amounts of money were spent. Many experienced process-organizing experts were paid to come and help. It simply did not work. Ownership was not encouraged; common sense had been lost long ago. The size of the team compromised both ownership and common sense in a very short period.

Why was the team so big, and why did we not move forward? The answers to these questions are specific each time something like this happens, but the rule is to never get in a situation where you need so many people working under the same deadline – unless the type of work you are doing can be done by a well-written script – there are products these days that generate source code for that type of work.

 

Common sense. Ownership. Size of the team. Lead by example. Agile concepts are nothing more but a derivative of these few terms, written by people who were lucky enough to spend time in such environments.

Why are we unable to ‘stabilize’ software development in the sense that ‘throwing’ enough people at it will always solve the problem in an amount of time proportional to the team size increase? To make a comparison with another industry, let’s say that I am about to build a huge building and the architecture is already worked out in detail. My thinking is that having 100 construction workers will probably be 40 times faster than building it with 2 of them. I would lose some efficiency in the overhead of organizing all those people and assigning them to their next task in an optimal way – but it would surely go way faster. In software development that is not the case – things could in fact even get worse – and the question is why.

Developing software means making something new, something not yet developed. Otherwise you probably wouldn’t even get the assignment to build it. Or, if someone else has developed something similar before, this time it needs to be rebuilt slightly differently, with entirely new technologies, in a different context, and for a different client. In other words, each time a software endeavor is about to start, it requires some ice breaking. Ice breaking is not done with hundreds of ships that know how to sail (know “how to code”); it is done by a limited number of ships that are skilled in ice breaking. Only after, and not before, the sea is open for sailing can you send in hundreds of other ships, with the simple purpose of traveling from point A to point B. But by the time the sea is open for sailing, you realize that in fact there is no need for hundreds of ships at all. We realize that we were in fact misled by someone telling us that clearing our sea of ice would take years and hundreds of ships.

 

Actual coding – sailing in the open sea with some ice blocks floating around – is the easy and fun part of any software project. Good ships (developers) are needed to navigate and avoid those remnants of floating ice. To tackle them by recognizing and challenging badly written requirements, by writing well-designed and readable code, by writing a test for every change of the code in order to avoid hitting any ice ever again.

Heading towards the exact same destination is the next challenge, a challenge that we can overcome only with perfect communication, that is, a team small enough to think and talk alike.

Good software development is about having a small team of people with willingness (ownership) and capability to understand what the business actually needs (create and implement well written user stories). It is about making sure that the chosen technologies will always work in cohesion (continuous integration); that the whole thing will actually deploy on the required platform and that the delivered solution will effectively be maintainable in the long run (use test driven development).

 

Author: Sejo Ćesić      

 

 

TACTA Internship

In December 2018, we started a .NET internship for the first time in Tacta. We had some experience in mentoring, but not in organizing an internship that simulates a real working environment. Having little experience in this area, but led by the idea that our principles should be applied to the internship, we succeeded!

We wanted to provide interns an experience of a real agile environment that includes daily scrums, planning, and retrospective.

We also wanted the interns to get familiar with the best practices in our company such as pair-programming, test-driven development and domain-driven design.

It was a challenge to incorporate all these things into the internship, but it was also exciting. Together with the interns, we co-created the internship. It turns out that we learned a lot through the whole process, but most importantly, we got a chance to meet and spend time with great people.

 

We asked the interns to tell us what they think about the internship, and here is what they said:

 

“After spending almost three years of my life attending college lectures and mostly learning about theoretical subjects in the IT world, I felt it was time to put everything I learned there to a test in a more practical environment. My internship at Tacta turned out to be exactly the challenge I wanted.

Working in a team and experiencing first-hand the marvels of pair-programming really gave me a different perspective on programming, and how much more effective and fun it is when you have an extra set of eyes focusing on the task at hand. Consistent code reviews and a culture which encourages everyone to speak their mind and constantly ask questions were something that really made a huge impact on how fast I was progressing and learning stuff that I thought would take me a long time to learn.

The domain of knowledge which this internship has instilled in me was broader than I imagined. Learning the ins and outs of .NET Core and Angular quickly became something that is fairly trivial, the real challenge and, the most interesting and important aspect (to me) was learning about and implementing test-driven development along with trying to grasp some of the domain driven design principles. That definitely felt very unique and that’s where the technical aspect of this internship really shines through.

Along with Tarik (our mentor), it really felt great that everyone at Tacta was more than kind and always willing to lend a helping hand, and made us feel like we really were, for those three months, a part of a family.” – Nermin

 

 

“The reason I chose this internship was the technology that was advertised, .Net. We focus on .Net at the Faculty, and I thought, it could be nice to learn how technology is used in a business environment. Yet, the reasons why I loved the internship had little to do with the technologies itself but with the “philosophy” I was taught. We were advised more on how to think when working, what was a proper way to build a system, how to communicate with your colleagues, clients and superiors, basic principles of DDD and TDD. Also, I’d like to mention that the atmosphere in the company was nice, we were able to talk to almost all employees, including the CEO himself and it was nice that our mentor, Tarik, allowed us to organize our time and work among ourselves. I like to think that I improved a lot at Tacta, as a developer, and as a team member. This concerns both the technologies we used and the way of thinking in a business environment.” – Selim

 

“I’m not sure where to start in describing my journey as a Tacta intern…

Why did I apply? What did I expect? To be honest, like many of my friends at the faculty, at one point in time I was aware that school projects and other faculty-related stuff did not reflect real-world programming. It all looked fake and unreal in my eyes. I knew I had to expand my knowledge and explore other ideas and concepts. I wanted to meet the people from the industry and try to see how they tackle everyday programming challenges. To see programming beyond “just making a program work”. This internship was definitely an eye opener for a lot of things stated above.

From the first day of the internship the learning process started and, really, it never stopped. In a matter of days, I found myself working in the team of young people, tackling challenges and discussing ideas. I would like to emphasize the discussion part, as we discussed pretty much everything. From the basic ideas to the problems that we encountered. There were no stupid questions. A big part of this discussion was definitely our mentor Tarik who followed us through our journey and made sure we were right on track. Soon I saw a lot of things that you only read about in school, put in practice. Pair programming, DDD, TDD, clean code and so on. It was the first time that all these things made sense and before I knew it, I wasn’t just thinking of how to make something work, I was thinking of how to make it the right way. Probably the greatest revelation for me was acknowledging the importance of non-technical skills, or as people like to call them these days – soft skills. We had to learn to communicate correctly, listen actively, present our ideas clearly, manage our time and so on. Not really the stuff that first crosses your mind when talking about software development, at least not in the eyes of developers.

I could go on and on about all the great people I have met and other cool things that I have learned, but I believe that you get the point that I’m trying to make. All in all, I can really say that Tacta internship was one heck of a journey that I recommend to everyone.” – Dejan

 

“After completing lectures at the Faculty, with only one exam left to complete my education and get my Bachelor’s degree in Information Technology, I realized that I would have plenty of free time that I should use in the right way. Then I heard that Tacta offers internships for students. I was thinking that an internship could be the first step towards becoming a developer and a starting point for my career. So, I applied and, fortunately, they recognized my potential and desire to learn.

The internship lasted for 3 months and on the very first day we entered the real dynamic world of the IT industry. We were in a group of 4 and we had a mentor. Our mentor and other employees in the company were always available for all the questions we had and were always willing to share their knowledge with us. Having the opportunity to work in an agile environment was a new and very interesting experience for me. At the Faculty, I learned what standups, sprints, sprint retro and all elements of agile methodology are in theory but through this internship I had the opportunity to experience it in real life. Also, I had a chance to experience pair programming technique. It was difficult in the beginning but from day to day, our communication skills were improving, and I realized that the development process becomes faster because two heads are better than one. We used TDD and writing a test for each functionality helped me figure out how testing is an important part of any software development methodology. It is required to detect bugs and defects before the final product is delivered to the client and it is an essential component of successful development projects.

We worked on a project from start to finish and learned everything that is needed for web development and more. The internship covered every bit of the software development process and taught us how to become full stack developers. In just a few months I learned how to create backend and frontend parts of the application in some highly popular frameworks. By completing each assignment, the mentor had taught us the best practices of coding. For the backend part of apps, we developed a RESTful web API in C# and .NET Core. By making a frontend part, I stepped into the amazing world of JS frameworks which I had no previous experience of. Using Angular and Material Design, we built an interactive and dynamic application that looks very nice. During the internship program I became familiar with a lot of helpful things that every junior developer needs to know like using git, design patterns etc. In addition to technical skills, I have also improved my soft skills and this internship gave me a lot of confidence and courage to step into the world of IT and look for my dream job.” – Lejla