I’ve worked in software development for 20 years. Through those years, across dozens of projects in different industries, I have witnessed the evolution of software process organization from Waterfall to RUP and, nowadays, agile methodologies.
What is so challenging and difficult about software development processes that sparks so many discussions and even requires people and organizations who make a living by simply advising on the matter?
Well, my answer is: the quality of the people. The second factor is the size of the team, and the third is leadership by example.
I will start with an example of a project I once worked on back in 2004. Five guys (me included) were assigned the task of developing a new worldwide part ordering system for a client in the automotive sector. The existing (mainframe) database was there. The requirements were there. We had no process at all. Our lead architect was developing with us; the analyst was sitting at the table next to us. The manager would come from time to time with a demand for the next delivery, we would all listen to him at that moment and return to work after that. Everything on that project went perfectly smoothly, on time and on budget. What would we call this? A success story that was agile even before the term was coined? Could be.
But an even better answer is that this was a project where only people with enough common sense and ownership were involved. And there were not too many of us. The guy who knew everything about the domain really cared about the business and was coding with us.
The following is an example of the other extreme. A project with almost 100 developers, many managers, testers, and analysts – all paid from a huge government budget. In the end, it did not deliver anything useful, and huge amounts of money were spent. Many experienced process-organization experts were paid to come and help. It simply did not work. Ownership was not encouraged; common sense was lost long ago. The size of the team compromised both ownership and common sense in a very short period.
Why was the team so big, and why did we not move forward? The answers to these questions are specific each time something like this happens, but the rule is to never get into a situation where you need so many people working under the same deadline – unless the type of work you are doing can be done by a well-written script; there are products these days that generate source code for that type of work.
Common sense. Ownership. Size of the team. Lead by example. Agile concepts are nothing more than a derivative of these few terms, written by people who were lucky enough to spend time in such environments.
Why are we unable to ‘stabilize’ software development in the sense that ‘throwing’ enough people at it will always solve the problem in an amount of time proportional to the increase in team size? To make a comparison with another industry, let’s say that I am about to build a huge building and the architecture is already worked out in detail. My thinking is that having 100 construction workers will probably be 40 times faster than building it with 2 of them. I would lose some efficiency to the overhead of organizing all those people and assigning them to their next task in an optimal way – but it surely would go much faster. In software development, that is not the case – things could in fact even get worse – and the question is why.
Developing software means making something new, something not yet developed. Otherwise you probably wouldn’t even get the assignment to build it. Or, if someone else has developed something similar before, this time it needs to be rebuilt, slightly differently and with entirely new technologies, in a different context, and for a different client. In other words, each time a software endeavor is about to start, it requires some ice breaking. Ice breaking is not done with hundreds of ships that know how to sail (that know “how to code”); it is done by a limited number of ships that are skilled in ice breaking. Only after the sea is open for sailing, and not before, can you send in hundreds of other ships, with the simple purpose of traveling from point A to point B. But by the time the sea is open for sailing, you realize that in fact there is no need for hundreds of ships at all. We realize that we were misled by someone telling us that clearing our sea of ice would take years and hundreds of ships to accomplish.
Actual coding – sailing in the open sea with some ice blocks floating around – is the easy and fun part of any software project. Good ships (developers) are needed to navigate around those remainders of floating ice: to tackle them by recognizing and challenging badly written requirements, by writing well-designed and readable code, and by writing a test for every change of the code in order to avoid hitting any ice ever again.
Heading towards the exact same destination is the next challenge, a challenge that we can overcome only with perfect communication, that is, a team small enough to think and talk alike.
Good software development is about having a small team of people with willingness (ownership) and capability to understand what the business actually needs (create and implement well written user stories). It is about making sure that the chosen technologies will always work in cohesion (continuous integration); that the whole thing will actually deploy on the required platform and that the delivered solution will effectively be maintainable in the long run (use test driven development).
Author: Sejo Ćesić
In December 2018, we started a .NET internship for the first time in Tacta. We had some experience in mentoring, but not in organizing an internship that simulates a real working environment. Having little experience in this area, but led by the idea that our principles should be applied to the internship, we succeeded!
We wanted to provide the interns with the experience of a real agile environment that includes daily scrums, planning, and retrospectives.
We also wanted the interns to get familiar with the best practices in our company such as pair-programming, test-driven development and domain-driven design.
It was a challenge to incorporate all these things into the internship, but it was also exciting. Together with the interns, we co-created the internship. It turns out that we learned a lot through the whole process, but most importantly, we got a chance to meet and spend time with great people.
We asked the interns to tell us what they think about the internship, and here is what they said:
“After spending almost three years of my life attending college lectures and mostly learning about theoretical subjects in the IT world, I felt it was time to put everything I learned there to a test in a more practical environment. My internship at Tacta turned out to be exactly the challenge I wanted.
Working in a team and experiencing first-hand the marvels of pair-programming really gave me a different perspective on programming, and how much more effective and fun it is when you have an extra set of eyes focusing on the task at hand. Consistent code reviews and a culture which encourages everyone to speak their mind and constantly ask questions were something that really made a huge impact on how fast I was progressing and learning stuff that I thought would take me a long time to learn.
The domain of knowledge this internship instilled in me was broader than I imagined. Learning the ins and outs of .NET Core and Angular quickly became fairly trivial; the real challenge, and the most interesting and important aspect (to me), was learning about and implementing test-driven development along with trying to grasp some of the domain-driven design principles. That definitely felt very unique, and that’s where the technical aspect of this internship really shines through.
Along with Tarik (our mentor), it really felt great that everyone at Tacta was more than kind and always willing to lend a helping hand, and made us feel like we really were, for those three months, a part of a family.” – Nermin
“The reason I chose this internship was the technology that was advertised, .NET. We focus on .NET at the Faculty, and I thought it could be nice to learn how the technology is used in a business environment. Yet, the reasons why I loved the internship had little to do with the technologies themselves, but with the “philosophy” I was taught. We were advised on how to think when working, what a proper way to build a system is, how to communicate with your colleagues, clients and superiors, and the basic principles of DDD and TDD. Also, I’d like to mention that the atmosphere in the company was nice, we were able to talk to almost all employees, including the CEO himself, and it was nice that our mentor, Tarik, allowed us to organize our time and work among ourselves. I like to think that I improved a lot at Tacta, as a developer, and as a team member. This concerns both the technologies we used and the way of thinking in a business environment.” – Selim
“I’m not sure where to start in describing my journey as a Tacta intern…
Why did I apply? What did I expect? To be honest, like many of my friends at the faculty, at one point in time I was aware that school projects and other faculty-related stuff did not reflect real-world programming. It all looked fake and unreal in my eyes. I knew I had to expand my knowledge and explore other ideas and concepts. I wanted to meet the people from the industry and try to see how they tackle everyday programming challenges. To see programming beyond “just making a program work”. This internship was definitely an eye opener for a lot of things stated above.
From the first day of the internship the learning process started and, really, it never stopped. In a matter of days, I found myself working in the team of young people, tackling challenges and discussing ideas. I would like to emphasize the discussion part, as we discussed pretty much everything. From the basic ideas to the problems that we encountered. There were no stupid questions. A big part of this discussion was definitely our mentor Tarik who followed us through our journey and made sure we were right on track. Soon I saw a lot of things that you only read about in school, put in practice. Pair programming, DDD, TDD, clean code and so on. It was the first time that all these things made sense and before I knew it, I wasn’t just thinking of how to make something work, I was thinking of how to make it the right way. Probably the greatest revelation for me was acknowledging the importance of non-technical skills, or as people like to call them these days – soft skills. We had to learn to communicate correctly, listen actively, present our ideas clearly, manage our time and so on. Not really the stuff that first crosses your mind when talking about software development, at least not in the eyes of developers.
I could go on and on about all the great people I have met and other cool things that I have learned, but I believe that you get the point that I’m trying to make. All in all, I can really say that Tacta internship was one heck of a journey that I recommend to everyone.” – Dejan
“After completing lectures at the Faculty, with only one exam left to complete my education and get my Bachelor’s degree in Information Technology, I realized that I would have plenty of free time that I should use in the right way. Then I heard that Tacta offers internships for students. I was thinking that an internship could be the first step towards becoming a developer and a starting point for my career. So, I applied and, fortunately, they recognized my potential and desire to learn. The internship lasted for 3 months and on the very first day we entered the real dynamic world of the IT industry. We were in a group of 4 and we had a mentor. Our mentor and other employees in the company were always available for all the questions we had and were always willing to share their knowledge with us. Having the opportunity to work in an agile environment was a new and very interesting experience for me. At the Faculty, I learned what standups, sprints, sprint retro and all elements of agile methodology are in theory but through this internship I had the opportunity to experience it in real life. Also, I had a chance to experience pair programming technique. It was difficult in the beginning but from day to day, our communication skills were improving, and I realized that the development process becomes faster because two heads are better than one. We used TDD and writing a test for each functionality helped me figure out how testing is an important part of any software development methodology. It is required to detect bugs and defects before the final product is delivered to the client and it is an essential component of successful development projects. We worked on a project from start to finish and learned everything that is needed for web development and more. The internship covered every bit of the software development process and taught us how to become full stack developers. 
In just a few months I learned how to create backend and frontend parts of the application in some highly popular frameworks. By completing each assignment, the mentor had taught us the best practices of coding. For the backend part of apps, we developed a RESTful web API in C# and .NET Core. By making a frontend part, I stepped into the amazing world of JS frameworks which I had no previous experience of. Using Angular and Material Design, we built an interactive and dynamic application that looks very nice. During the internship program I became familiar with a lot of helpful things that every junior developer needs to know like using git, design patterns etc. In addition to technical skills, I have also improved my soft skills and this internship gave me a lot of confidence and courage to step into the world of IT and look for my dream job.” – Lejla
For those that haven’t really moved past the blue book…
DDD for many, including me, brought back the enjoyment of software development. Implementation becomes easy when you break down the domain. The bits and pieces suddenly start to “fit” together in a way they did not before, and the implementation itself becomes straightforward, which results in simple, maintainable, easy-to-understand code that will outlive the development team itself.
DDD has come a long way since “the blue book”, but in my experience, not enough people realize that DDD is a growing, evolving thing and that it has indeed learned a few new tricks along the way. This blog is a brief recap of some of the notable things that happened in the DDD community during the last decade or so.
Can I DDD?
First things first. What do you need exactly to successfully implement DDD? What are the prerequisites without which it wouldn’t be feasible?
According to Eric Evans, there are two main ones:
- Iterative development process
- Access to domain experts
Iterative development, in my opinion, is the life force of DDD. It enables experimentation and exploration, and calls for active refactoring of the problem domain as you continue to gain insight alongside your domain experts. Additionally, you make use of tight feedback loops, including feedback obtained through actual usage of the system being constructed.
When you really come to think about it, what are the odds that you have discovered the “best” model for your domain the first time? Even the first couple of times?
The chances are pretty slim – especially since there is no such thing as “the perfect” model for almost anything you might be modeling.
A domain model is a living thing. It grows and evolves over time through active discovery and is never really done. It only gets to be “good enough” at best – which is totally fine.
One of the significant mistakes we all make is slipping towards perfectionism. As we already made clear, DDD depends on iteration, so don’t get caught up in details too early. Do the first prototype quickly, then get to the second one quickly, and so on.
“Perfect is the enemy of good” and perfectionism prevents you from doing enough innovation.
With that being said, don’t settle for the first useful model you encounter. Keep iterating, rigorously refine, and watch out for even the most innocent-looking workarounds. They are an indication of a non-optimal model, and it’s almost certain that you have missed a modeling opportunity somewhere along the way.
Iterative development approach makes heavy use of domain experts. It’s hard to have one without the other, and missing out on any of those will most certainly make your DDD efforts futile.
At this point, you might be thinking: “Well, what about DDD lite? It provides me with a lot of useful abstractions and modeling tools. Couldn’t I get away with just using the tools and abstractions that the DDD lite approach provides?”
Sorry, but the answer is no. DDD lite can only get you so far due to its nature. It provides a set of modeling tools/building blocks (e.g. entities, repositories, value objects, etc.) that help you implement DDD itself. By using only DDD lite, you actually miss out on DDD altogether…
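To make the distinction concrete, here is a minimal sketch of one such building block, a value object, written in Python for brevity (the names are illustrative, not taken from any particular codebase):

```python
from dataclasses import dataclass

# A value object: immutable, has no identity of its own, and is compared
# purely by its attributes. Two Money objects with the same amount and
# currency are fully interchangeable.
@dataclass(frozen=True)
class Money:
    amount: int       # minor units (e.g. cents) to avoid float rounding issues
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot add amounts in different currencies")
        return Money(self.amount + other.amount, self.currency)
```

Having such a type is useful, but it is exactly the kind of tactical detail that, on its own, tells you nothing about whether money handling belongs to your core domain at all.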
What use are all of the abstractions and modeling tools, if you don’t have a good idea of what you are building, or even worse… If you are building the wrong thing?
Explicit context boundaries and the Core Domain
Focusing on the Core Domain, as Eric Evans puts it, is a game changer.
Focus your DDD efforts on your Core Domain. The stuff that really makes your company stand out from the competition. The thing that gives you an edge and a competitive advantage on the market.
Companies can waste so much time, money, and effort by reinventing the wheel and applying DDD to parts of their domain that could have gotten away with a simpler approach or even been replaced by existing, off-the-shelf solutions.
But, in order to identify your true core domain, you will need to define explicit context boundaries through any of the context mapping techniques (I suggest you look up Event Storming, more about it later…).
With all of this being said, it takes a certain level of discipline to keep bounded contexts separate, but it yields great benefits. Almost any project, whether it makes use of DDD or not, big or small, can benefit from context mapping and explicitly defined context boundaries. They separate the really important parts of the domain from the less important ones and will ultimately help you identify your Core Domain – and that’s where most of your DDD efforts need to be directed.
Context mapping and the big ball of mud
What do you do if you are dealing with a legacy system, a big ball of mud? How do you get a taste of DDD there (assuming you still can employ the iterative development approach, and have access to domain experts)?
Well, just because the legacy system, i.e. “the big ball of mud,” exists, it does not mean you have to keep cramming it with new features. Rather, employ your context mapping techniques here. Draw a line around it and say, “this is my big ball of mud,” and then draw a line around your new service and treat it as a separate bounded context.
As Evans puts it, it’s probably inevitable that your service will get enclosed by the big ball of mud eventually (since it will eat almost anything), but at least you had a nice run for a time.
A word on DDD building blocks
Let’s touch upon DDD lite again quickly.
Building blocks (Entities, Value Objects, Factories, Repositories …) are overemphasized! — Eric Evans
Yes, you heard that right. Building blocks have gotten too much attention. Don’t get me wrong, they are still important and provide great value, but building blocks are what they are: a means to an end, mere implementation details that help you implement the strategic DDD patterns.
As Evans stated, he regrets putting the strategic patterns way back at the end of the book, which probably resulted in many people giving more importance to the tactical patterns because they come first – or, even worse, getting so caught up in the intricate implementation details of the tactical patterns that they never reach the most important part of the book.
The thing to take away is that building blocks give you the tools to implement DDD, but you should give much more focus to the strategic patterns – even more so because tactical patterns/building blocks will continue to evolve: some will become obsolete and new ones will be added (e.g. Domain Events).
An aggregate represents a conceptual whole in the domain that is composed of smaller parts (Value Objects and/or other Entities) and protects an invariant that must always hold across all of them.
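A minimal sketch of that idea (illustrative names, Python for brevity): the aggregate root is the only entry point, and every state change goes through it so the invariant can be checked in one place.

```python
class OrderLine:
    """Child entity: reachable only through its aggregate root."""
    def __init__(self, sku: str, quantity: int):
        self.sku = sku
        self.quantity = quantity

class Order:
    """Aggregate root protecting one invariant:
    the total quantity across all lines never exceeds 100."""
    MAX_TOTAL_QUANTITY = 100

    def __init__(self, order_id: str):
        self.order_id = order_id
        self._lines = []

    def total_quantity(self) -> int:
        return sum(line.quantity for line in self._lines)

    def add_line(self, sku: str, quantity: int) -> None:
        # The invariant is enforced here, at the root, before any change happens.
        if self.total_quantity() + quantity > self.MAX_TOTAL_QUANTITY:
            raise ValueError("order would exceed the maximum total quantity")
        self._lines.append(OrderLine(sku, quantity))
```

Because callers can never touch `OrderLine` objects directly, there is no code path that can break the invariant behind the aggregate’s back.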
One concern that pops up often involves the awkward cases where an aggregate has an invariant that crosses thousands of entities.
Since we can access a child entity only through its parent aggregate, do we load all of them each time we load an aggregate? Do we make use of lazy loading? Do we model this differently even though it’s a business invariant? If yes, what’s the correct model for this?
The bottom line is that OO is really not good at handling collections of objects, especially very large collections, which calls for loosening and “bending” the rules a bit in these cases.
The same applies if you have a small number of aggregates but have a lot of concurrent users that might be interacting with the aggregate at the same time.
In cases like these, you might consider modeling those entities as separate aggregate(s) and try enforcing the business invariant on a higher level. For example, in domain services, process managers/sagas etc…
In short, make the consistency an explicit concern of your domain instead of it being solved implicitly through the infrastructure.
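As a hedged sketch of what “making consistency explicit” can look like: instead of one giant aggregate holding thousands of registrations, a domain service checks the rule against a count supplied from outside (e.g. a read model). The names here are hypothetical, not from any specific system.

```python
class SeatLimitPolicy:
    """Domain service: enforces a cross-aggregate rule explicitly, instead of
    forcing one aggregate to load thousands of child entities to check it."""
    def __init__(self, count_registrations, seat_limit: int):
        # count_registrations is injected, e.g. backed by a read model/projection
        self._count_registrations = count_registrations
        self._seat_limit = seat_limit

    def can_register(self, course_id: str) -> bool:
        return self._count_registrations(course_id) < self._seat_limit
```

A command handler or process manager would consult this policy before creating a new registration aggregate, accepting that the rule is now enforced with eventual rather than transactional consistency.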
As it turns out, another potential solution is related to another question/concern that keeps popping up when working on a project that makes use of CQRS as a standalone pattern or in a combination with Event Sourcing.
Can write side query the read side?
According to Greg Young, the answer is “yes, absolutely, there are cases when you simply have to” – and one of those cases is the challenge we just described.
I’d argue that even if you don’t face the aggregates/entities problem we describe here, there are a lot of cases where it’s OK to query the read side (I have certainly done it more than once).
It’s fairly easy to spot these opportunities because in the majority (if not all) of cases they present themselves in the form of a specification pattern – but since you are using Event Sourcing, or CQRS as a standalone pattern, you can’t really make use of specifications.
But luckily for you, specifications and CQRS are two competing/interchangeable patterns.
A rule of thumb is to aim for very small, specific projections that provide a very specific answer to a very specific question – one that would be very cumbersome to answer by querying domain models.
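A sketch of such a small, specific projection (hypothetical names): it answers exactly one question and is trivially kept up to date from the event stream.

```python
class EmailsInUseProjection:
    """Read-side projection answering one very specific question:
    'is this email address already taken?'"""
    def __init__(self):
        self._emails = set()

    def when_user_registered(self, event: dict) -> None:
        # Called for each UserRegistered event as it is published.
        self._emails.add(event["email"].lower())

    def is_taken(self, email: str) -> bool:
        return email.lower() in self._emails
```

The write side can consult `is_taken()` while handling a register-user command – a far cheaper check than loading every user aggregate to look for a duplicate.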
Vaughn Vernon has a lot to say about aggregate design.
Check out his two-part Aggregate Design paper:
– Effective Aggregate Design Part I
– Effective Aggregate Design Part II
I also highly recommend you check out his DDD book:
Implementing Domain-Driven Design
DDD in a modern “always on” world
Software applications outside of the enterprise world, in general, have quite different requirements in terms of performance, latency, responsiveness, and scalability. There was a fear that applying DDD patterns to these kinds of domains would not really be feasible due to the aforementioned constraints and the overhead that OOP with DDD applied would introduce.
This might have been a great obstacle to widespread DDD adoption, but luckily, applying DDD to these kinds of domains gave birth to a new approach (the ideas were there for centuries actually) towards applying DDD under the name of Event Sourcing and CQRS.
In short, Event Sourcing and its complementary pattern, CQRS, gave us an extravagant revamp of the DDD building blocks. They offer a way to employ DDD patterns with Domain Events as first-class citizens and the sole source of truth in these kinds of systems, while satisfying their high-throughput / high-availability needs by letting us scale reads and writes separately and integrate services via Domain Events.
Event Sourcing also provides a stepping stone towards implementing DDD in a functional world, due to the immutable nature of its event streams (an aggregate’s state is a left fold of its events).
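That “left fold” is literal: rebuilding an aggregate’s state is just folding an apply function over its event stream. A minimal sketch, with illustrative event shapes and Python for brevity:

```python
from functools import reduce

# Events are immutable facts; state is derived from them, never stored
# as the source of truth.
def apply(state: dict, event: tuple) -> dict:
    kind, amount = event
    if kind == "Deposited":
        return {**state, "balance": state["balance"] + amount}
    if kind == "Withdrawn":
        return {**state, "balance": state["balance"] - amount}
    return state  # unknown events are ignored, which eases schema evolution

def replay(events) -> dict:
    # The left fold itself: current_state = fold(apply, initial_state, events)
    return reduce(apply, events, {"balance": 0})
```

Because `apply` is a pure function of state and event, replaying the same stream always yields the same state – which is what makes the functional framing so natural.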
But meeting scalability demands of large distributed systems is not the only benefit of employing this kind of event-centric approach towards modeling our systems.
There are a number of other benefits of employing an event-first modeling approach:
- Explicitly modeling important domain events and formalizing them, forces domain experts to think in terms of the behavior of their system instead of in terms of its structure. This especially helps, since people tend to think in terms of their legacy systems, instead of focusing on the problem they are really trying to solve.
- By modeling events, and focusing on the behaviors instead of nouns, even the domain experts get a different perspective on their domain and gain additional insight.
- Event modeling forces a temporal focus and makes time a crucial factor (which it is).
If any of these resonate with you I encourage you to check out Event Storming by Alberto Brandolini. Event Storming employs an event-centric approach in order to distill the domain.
I won’t go into detail here, but I will mention that Event Storming has a number of different flavors, which you will find on the site and in the book. I would, however, like to mention one more variant, by Greg Young.
In his variant, you basically just take one single long-running business process end to end and model it using Event Storming in order to discover your service boundaries. This worked very well for me.
Event Sourcing / CQRS misconceptions/pain points
During one of his talks, Young focused on some recurring pain points and misconceptions that kept coming up regarding Event Sourcing and CQRS, and offered clarification and advice on how to approach them. Here is a short recap.
CQRS is not a top-level architecture!
CQRS is a supporting pattern and you need to treat it as such. Don’t go crazy! Instead, apply it selectively to a few places.
Commands must return void?!
The bottom line is that it’s not about return values, it’s about side effects. It is perfectly OK for commands to return a list of errors, for example, instead of relying only on throwing exceptions (which is a bit of a bad practice anyway).
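One hedged way to act on that advice is a small result object: the command handler collects validation errors and returns them, reserving exceptions for genuinely exceptional failures. The names below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class CommandResult:
    errors: list = field(default_factory=list)

    @property
    def succeeded(self) -> bool:
        return not self.errors

def handle_rename_customer(new_name: str) -> CommandResult:
    errors = []
    if not new_name.strip():
        errors.append("name must not be empty")
    if len(new_name) > 100:
        errors.append("name must be at most 100 characters")
    if not errors:
        pass  # apply the change to the aggregate here
    return CommandResult(errors)
```

The caller still gets the ability to be told “no,” but through an ordinary value instead of control flow by exception.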
Command vs Domain Event is not strictly a one-to-one relationship!
An event does not necessarily have a corresponding command, nor does a command have exactly one corresponding event being published.
It is important to understand that there are always two sets of use cases. The commands coming in and the events coming out.
An event is not strictly the result of a command coming in. In event-centric architectures this is commonplace, and the whole point is that events cause things to happen. A business process that results in publishing a domain event is often triggered by another event, without any command.
There is no such thing as a one-way / async Command
The whole point of a command is that I have the ability to tell you No!
“Async” commands don’t really give you that option, you just fire and forget, which is kind of defeating the purpose. They do not work well in the real world.
Accepting a command should mean that you have validated it, you can execute it, you have processed it, and it’s done. Otherwise, what you really want is a downstream event processor.
Don’t write a CQRS / Event Sourcing framework. Period.
I think we need to start realizing that you do not need a framework for everything. Frameworks have their place, but we need to put an end to the framework-first mindset and instead try to solve our problems with focused modules and libraries, rather than relying on almost-always-too-generic frameworks that tend to sprawl their tentacles all over our code base.
We need better examples!
As Greg Young puts it, we need better examples. Event Sourcing is hard, that’s a fact, and simple examples like simplified shopping carts don’t really do it justice.
Recommended training material:
Author: Anes Hasičić
This blog has also been published on Medium
Back in 2008, Robert C. Martin proposed a fifth value for the Agile Manifesto: “Craftsmanship over Execution”. With this value he wanted to tackle one of the main reasons why software fails: situations when teams execute but don’t care.
With this idea, the goal is to draw attention to readability, stability, and technical excellence. To accomplish this, it is often best to have an official development culture, with a set of standards and principles, in the team and the company.
But that is not always the case. Management often refuses to allow certain practices to become official, but even in such cases, developers should not be stopped from applying them. After all, to become a professional software developer, one needs to start with oneself. Elevating yourself to higher standards can be done even if your surroundings don’t set the path. There are a lot of ways to achieve high-quality code without official permission. After all, to become a better developer, you need to take pride in what you do and have something to defend.
Here are three practices, mostly from the enterprise world, that can help you achieve better personal productivity:
Boy Scout Rule
Always leave things better than you found them. This rule applies in every part of life and, consequently, in software development. If you are adding a new feature, fixing a bug, or refactoring, always think about how you can improve what you come across.
But be aware: don’t make big changes. If you think something should be improved but it would take a lot of time, propose it for refactoring (just creating a proposal for refactoring means you are still tidying up the code). Make only smaller changes around what you are originally doing, things that take a couple of minutes: improving a logging message, handling an exception better, creating a couple more tests for edge cases, breaking bigger methods into two or more smaller ones, fixing a method or test name to be more meaningful, etc.
These kinds of changes don’t take much time, but will drastically improve the quality over a longer period of time.
Test-Driven Development
Test-driven development is a way of designing software by writing tests before you write code. The goal is to think about the usage of your code before you write it, and to use tests to think about how it should work. If you do so, you will end up designing APIs and publicly available methods to be cleaner and easier to use. Advanced TDD techniques will help you create highly abstract modules with very high test coverage. Another product of this approach is that you will have a lot of automated acceptance tests, edge-case tests, and negative test cases covering your code. A third effect is that such tests can be used as documentation. As I said at the beginning, though, the goal is to use tests to design classes and modules better.
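A tiny illustration of the rhythm (Python, hypothetical example): the test is written first and pins down how the API should be used; the implementation exists only to make it pass.

```python
import unittest

# Step 2: the implementation, written only after the tests below existed
# (and initially just enough to make them pass).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 1: written first, these tests are design decisions about the API:
# slugify takes a title and returns a lowercase, hyphen-separated slug.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates_words(self):
        self.assertEqual(slugify("Hello Brave World"), "hello-brave-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")
```

Notice that the tests read like a usage example – which is exactly why they double as documentation.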
In practice, I have seen several bad implementations of TDD. In all of these cases, it happened because management decided to introduce it as an official practice in the company. The problem is that they thought of TDD as a way to increase the number of tests, and developers who had not previously worked with TDD often take that goal as the absolute truth.
They end up hating it because they see it as a push to write more tests. The result is poor tests, often written after the code but “sold” as if they were doing TDD.
This is why it is very important to start using and practicing TDD before making it official. The initiative to make it official must come from developers, not from management.
One of the biggest issues on large enterprise projects is complexity. Over time, technical debt increases and the code base explodes in size, which makes the code harder and harder to work with, even for small changes. It is even worse if the code is not written so that it can be changed by someone who did not write it, and no one from the previous team is around to help navigate it. One of the first goals you should set is not to become that kind of developer, the one who doesn’t care what will happen once they are no longer there to work on, or help with, the code they wrote.
The best approach here is to always start small and iterate in small steps. Set the big picture aside for a moment, focus on the smallest details, and handle them one by one. Ship early and ship often. Make sure each piece works and is covered with tests. The best place to start is the Single Responsibility Principle, whose goal is to have methods and classes that do one and only one thing. This makes them easy to test and easy to use.
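A small sketch of that principle, with hypothetical class names: instead of one invoice class that both computes totals and formats output, each class gets exactly one reason to change.

```python
# SRP sketch (hypothetical classes): one class per responsibility.

class InvoiceTotals:
    """Responsibility 1: arithmetic only -- no formatting concerns."""
    def __init__(self, line_items: list[float]):
        self.line_items = line_items

    def total(self) -> float:
        return sum(self.line_items)

class InvoiceFormatter:
    """Responsibility 2: presentation only -- no arithmetic."""
    def format(self, totals: InvoiceTotals) -> str:
        return f"Total: {totals.total():.2f}"

totals = InvoiceTotals([10.0, 15.0])
assert InvoiceFormatter().format(totals) == "Total: 25.00"
```

Each class can now be tested in isolation with two or three tiny tests, and changing the display format cannot break the arithmetic.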
By paying attention to detail while writing, you can make your code very easy for others to read and understand. Limit yourself: make every method no longer than 6 or 7 lines, and keep classes under 100 lines. This means you will start slow, but you will finish fast. By mixing TDD and ADD, you get acceptance integration tests that give you the bigger picture and set you on the right path, while the small iterations produce bulletproof classes and modules. Since the devil is in the details, you stop bugs from appearing early on; by the time you are done, you will have drastically reduced the number of bugs, QA will have less work, and you will not lose time debugging edge-case bugs by wading through a lot of code. Working step by step costs far less time than fixing large bugs and edge-case problems after writing a large body of code. In time you might find yourself using debugging tools very rarely, or not at all!
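One way to read that mix in miniature (all names here are hypothetical): a coarse acceptance test describes the feature end to end, while tiny unit tests pin down each small method. The methods themselves stay within the 6-to-7-line rule from the text.

```python
# Sketch of coarse acceptance tests plus fine-grained unit tests
# (hypothetical word-count feature).

def normalize(word: str) -> str:
    """Small unit: lowercase a word and strip punctuation."""
    return "".join(ch for ch in word.lower() if ch.isalnum())

def word_count(text: str) -> dict[str, int]:
    """Small unit: count normalized words in a text."""
    counts: dict[str, int] = {}
    for word in text.split():
        key = normalize(word)
        if key:
            counts[key] = counts.get(key, 0) + 1
    return counts

# Unit tests: bulletproof the small pieces first.
assert normalize("Hello!") == "hello"
assert word_count("a a b") == {"a": 2, "b": 1}

# Acceptance test: the bigger picture, the feature as a whole.
assert word_count("Tidy code. Tidy mind.") == {"tidy": 2, "code": 1, "mind": 1}
```

If the acceptance test breaks, the unit tests tell you which small piece failed, so there is little need to step through the whole flow with a debugger.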
Author: Adnan Isajbegović
Everyone who has had some connection with the IT sector knows about Agile. We all implement it in one way or another, but when something is as omnipresent as the Agile methodology, people tend to take it for granted. We now rarely ask ourselves about the effects of this methodology, since it has become the norm.
I am not a developer or an engineer of any kind, but a psychologist who works as an HR manager. My perspective on agile processes is different, as I focus on how they impact our thinking and behavior.
I cannot speak for how all companies implement agile, but I can share my perspective on the processes in the company Tacta. Here are five important effects that I have observed:
Sense of achievement
Using test-driven development makes each developer take full responsibility for the quality of the code. This approach shifts one’s focus from merely finishing one’s part to making the code high-quality.
Still, from my perspective, the most important effect of being involved in an entire development cycle is that it gives developers a sense of accomplishment and fulfillment.
Feedback, feedback, feedback
Where would we be without feedback? The answer is: probably stuck in our comfort zones. Doing different and new things is exciting, but it is often accompanied by a sense of insecurity, especially if we cannot see results right away. Every day we all face different kinds of mini choices, and having someone to give us timely and specific feedback can be really helpful. Practices such as pair programming and daily meetings allow exactly that.
Talking about challenges
In teams with more than one expert, there are a lot of different opinions. For me, transforming disagreement into a productive discussion is a special form of art and the key factors for achieving that are respecting your colleagues and being specific. In the end, joining forces to find a better solution instead of approaching the problem individually definitely makes it worthwhile. One of the great benefits of Agile is that it offers a lot of practices that facilitate discussion.
Openness to change
Authentic change happens not when we try to follow all the plans and rules in a rigid way, but when we become active listeners. Every developer who wants to make excellent software must be sensitive to clients’ and customers’ needs and wishes. This requires a lot of flexibility, and Agile is all about flexibility. One thing we could all do from time to time is ask ourselves whether we are still open to change and improvement, or whether we have started implementing Agile merely as a formality.
Support and challenge
It can be quite difficult to achieve the balance of support and challenge in our lives. When it comes to development, support and challenge are equally important.
At Tacta, support comes in the shape of feedback and the ability to openly discuss your ideas, while challenge comes in the shape of interesting projects and new technologies.
Sometimes the challenge outweighs the support, and sometimes the other way around, but what matters is that we all tend to be direct and open in communication, and that by working together we can create the best possible environment.
Author: Mirna Dajanović