How to split a monolith and legacy code via code-sketches

Legacy-code monoliths are one of those challenges that often lead to controversial discussions and questions like:

  • What are the pros and cons of getting rid of the code-base?
  • How should we get rid of it: with a big bang, or by splitting it step by step?
  • How much effort is involved?

In this post I’d like to describe a way to create a first code-sketch that can give insight into how to tackle the beast step by step.
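
To make the idea of a code-sketch a bit more tangible, here is a minimal sketch that is not taken from the original post; all class and interface names are invented for illustration. The candidate module boundary is named as an interface, while a facade still delegates to the untouched legacy class behind it.

    // Hypothetical first code-sketch (all names are invented for illustration):
    // the candidate module boundary is named as an interface, while a facade
    // still delegates to the untouched legacy class behind it.
    class LegacyOrderService {               // stands in for existing monolith code
        void process(String orderId) {
            System.out.println("legacy processing of " + orderId);
        }
    }

    interface OrderModule {                  // the future, explicit module boundary
        void placeOrder(String orderId);
    }

    class OrderModuleFacade implements OrderModule {
        private final LegacyOrderService legacy = new LegacyOrderService();

        @Override
        public void placeOrder(String orderId) {
            legacy.process(orderId);         // new callers no longer see the legacy API
        }
    }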

Application and system sketching based on code templates

On almost every desk on this planet lies a pen and a piece of paper for notes. In the programmer’s or software architect’s world, those notes are often technical notes about a component or system, drawn e.g. as UML sketches on paper or with a UML tool. However, a few minutes and a couple of discussions later, the names from the UML boxes are typed into an IDE as part of the source code that will be written during the next days, weeks and months…
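
As a small illustration of such a code template (not taken from the original post, all names invented), the boxes of the UML sketch can become empty interfaces plus a wiring class that compile from day one:

    // Hypothetical code template for a sketched component; the names mirror
    // the boxes of a UML sketch and nothing more.
    interface MessageGateway {
        void send(String payload);
    }

    interface AuditLog {
        void record(String event);
    }

    // The wiring between the boxes is sketched as well, still without any
    // real business logic behind it.
    class OrderProcessor {
        private final MessageGateway gateway;
        private final AuditLog audit;

        OrderProcessor(MessageGateway gateway, AuditLog audit) {
            this.gateway = gateway;
            this.audit = audit;
        }

        void process(String order) {
            audit.record("received " + order);
            gateway.send(order);
        }
    }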

Prototype sketching with Apache Camel

Almost every new project contains legacy code, legacy components or systems. Besides the business logic, a lot of boilerplate code has to be written just to integrate these legacy parts. Moreover, at the beginning of a project it is not clear whether the chosen approach for integrating the legacy components will actually work. Coding different scenarios to integrate several technologies is not done within a few hours; it is more likely to take a couple of days. In other words, an architecture or integration prototype would be helpful to verify the ideas and to figure out possible issues and challenges up front.
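
As an illustration of such an integration prototype, here is a minimal Apache Camel route sketch; the endpoints (a legacy file drop folder and a mock target) are assumptions for this example, not taken from the original post:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    // Prototype route (endpoints are assumptions): poll a legacy drop folder,
    // log each file and hand it to a placeholder endpoint.
    public class LegacyFilePrototype {

        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();

            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("file:data/legacy-export?noop=true")   // hypothetical legacy folder
                        .log("received ${file:name}")
                        .to("mock:newSystem");                  // placeholder for the target system
                }
            });

            context.start();
            Thread.sleep(10_000);   // let the prototype run for a short while
            context.stop();
        }
    }

Swapping the `file` and `mock` endpoints for JMS, FTP or database endpoints is a one-line change per endpoint, which is what makes such a sketch cheap to try out.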

A coffee with Kafka

A few months ago I got in touch with Kafka – the message broker Apache Kafka rather than Franz Kafka the novelist.
Apache Kafka is, simply put, a message broker that claims to be all of the following (a minimal producer sketch follows the list):

  • Fast – Hundreds of MB r/w per second from thousands of clients
  • Scalable
  • Central data backbone – Data streams are partitioned and spread over a cluster of machines
  • Durable – Messages are persisted on disk and replicated within the cluster
  • Fault-tolerant – Cluster-centric design

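To give a first feel for the API, here is a minimal producer sketch; the broker address, topic name and message content are assumptions for this example:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Minimal producer sketch; broker address, topic and message are assumptions.
    public class CoffeeProducer {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("coffee", "order-1", "one espresso, please"));
            }
        }
    }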