From Monolith to Microservices: A Practical Guide for Developers
When I started building a Medium-inspired blogging platform as part of my learning journey, I faced a critical architectural decision: should I build a monolith or embrace microservices from day one? This post chronicles that journey and shares practical insights for developers facing the same question.
The Monolith Dilemma
My initial design was straightforward: a single Spring Boot application handling users, articles, comments, and media. It worked well for development, but as requirements grew, several problems emerged:
Challenges I Encountered:
- Tight Coupling: Changing the user authentication system required touching article and comment modules
- Scaling Issues: Heavy image processing was slowing down article API responses
- Deployment Risk: A bug in the comment feature could take down the entire application
- Team Bottlenecks: Multiple developers couldn't work independently without merge conflicts
When to Choose Microservices
Before diving into microservices, ask yourself these questions:
✅ Good Reasons to Use Microservices:
- Independent Scaling: Different components have vastly different load patterns
- Team Autonomy: Multiple teams need to deploy independently
- Technology Diversity: Different services benefit from different tech stacks
- Fault Isolation: Failures in one area shouldn't cascade to the entire system
- Clear Bounded Contexts: Your domain has well-defined, separable components
❌ Bad Reasons to Use Microservices:
- "It's the modern way to build applications"
- "Netflix does it, so should we"
- Your application has fewer than 10 users
- Your team has fewer than 5 developers
- You don't have DevOps expertise
My Decision: Since this was a learning project meant to demonstrate scalability and modern architecture, microservices made sense despite the added complexity.
Architecture Design
I decomposed the monolith into five core services:
```
                ┌─────────────────┐
                │   API Gateway   │ ← Entry point, routing, authentication
                └────────┬────────┘
                         │
        ┌─────────┬──────┴───────┬──────────────┐
        ▼         ▼              ▼              ▼
   ┌────────┐ ┌──────────┐ ┌───────────┐ ┌────────────┐
   │ Users  │ │ Articles │ │   Media   │ │  Comments  │
   │Service │ │ Service  │ │  Service  │ │  Service   │
   └────┬───┘ └────┬─────┘ └─────┬─────┘ └──────┬─────┘
        │          │             │              │
        └──────────┴──────┬──────┴──────────────┘
                          │
                    ┌─────▼─────┐
                    │   Kafka   │ ← Event bus
                    └───────────┘
                          │
                    ┌─────▼─────┐
                    │  Eureka   │ ← Service discovery
                    └───────────┘
```
Service Boundaries
1. User Service
- Responsibilities: Authentication, user profiles, authorization
- Database: PostgreSQL (relational data with strict consistency)
- Communication: Synchronous REST APIs + Events for user creation/updates
2. Article Service
- Responsibilities: Article CRUD, publishing, drafts
- Database: MongoDB (flexible schema for rich content)
- Communication: REST APIs + Events when articles are published
3. Media Service
- Responsibilities: Image/video upload, processing, CDN integration
- Database: S3 for storage + Redis for metadata caching
- Communication: REST APIs + Events when processing completes
4. Comment Service
- Responsibilities: Comments, replies, moderation
- Database: MongoDB (nested comment threads)
- Communication: REST APIs + Events for new comments
Implementation: The Technical Details
1. Service Discovery with Eureka
Instead of hardcoding service URLs, I used Netflix Eureka for dynamic service registration:
```yaml
# bootstrap.yml in each service
spring:
  application:
    name: article-service
  cloud:
    config:
      discovery:
        enabled: true

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true
    leaseRenewalIntervalInSeconds: 10
```
```java
// ArticleServiceApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class ArticleServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ArticleServiceApplication.class, args);
    }
}
```
Benefit: Services find each other automatically. Adding new instances is seamless.
2. API Gateway with Spring Cloud Gateway
The gateway handles routing, authentication, and rate limiting:
```java
@Configuration
public class GatewayConfig {

    @Bean
    public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
        return builder.routes()
            .route("article-service", r -> r
                .path("/api/articles/**")
                .filters(f -> f
                    .stripPrefix(1)
                    .circuitBreaker(config -> config
                        .setName("articleServiceCircuitBreaker")
                        .setFallbackUri("forward:/fallback/articles")))
                .uri("lb://ARTICLE-SERVICE"))
            .route("user-service", r -> r
                .path("/api/users/**")
                .filters(f -> f
                    .stripPrefix(1)
                    .requestRateLimiter(config -> config
                        .setRateLimiter(redisRateLimiter())))
                .uri("lb://USER-SERVICE"))
            .build();
    }
}
```
3. Event-Driven Communication with Kafka
For asynchronous operations, I used Apache Kafka:
```java
// User Service - publishing events
@Service
public class UserEventPublisher {

    @Autowired
    private KafkaTemplate<String, UserEvent> kafkaTemplate;

    public void publishUserCreated(User user) {
        UserEvent event = new UserEvent(
            user.getId(),
            user.getEmail(),
            EventType.USER_CREATED,
            Instant.now()
        );
        kafkaTemplate.send("user-events", user.getId(), event);
    }
}
```
```java
// Article Service - consuming events
@Service
public class UserEventConsumer {

    @KafkaListener(topics = "user-events", groupId = "article-service")
    public void handleUserEvent(UserEvent event) {
        if (event.getType() == EventType.USER_CREATED) {
            // Create an author profile in the Article Service's own database
            authorRepository.save(new Author(
                event.getUserId(),
                event.getEmail()
            ));
        }
    }
}
```
Benefit: Services remain decoupled. If Article Service is down, User Service can still create users. Events are persisted in Kafka and processed when services come back online.
4. Circuit Breaker Pattern with Resilience4j
To handle service failures gracefully:
```java
@Service
public class ArticleClient {

    private final RestTemplate restTemplate;
    private final ArticleCache articleCache;

    public ArticleClient(RestTemplate restTemplate, ArticleCache articleCache) {
        this.restTemplate = restTemplate;
        this.articleCache = articleCache;
    }

    @CircuitBreaker(name = "articleService", fallbackMethod = "getArticleFallback")
    @Retry(name = "articleService", fallbackMethod = "getArticleFallback")
    public Article getArticle(String articleId) {
        return restTemplate.getForObject(
            "http://ARTICLE-SERVICE/articles/" + articleId,
            Article.class
        );
    }

    private Article getArticleFallback(String articleId, Exception e) {
        // Return cached data or a degraded response
        return articleCache.get(articleId)
            .orElse(new Article(articleId, "Article temporarily unavailable"));
    }
}
```
Configuration:

```yaml
resilience4j.circuitbreaker:
  instances:
    articleService:
      registerHealthIndicator: true
      slidingWindowSize: 10
      minimumNumberOfCalls: 5
      permittedNumberOfCallsInHalfOpenState: 3
      automaticTransitionFromOpenToHalfOpenEnabled: true
      waitDurationInOpenState: 5s
      failureRateThreshold: 50
```
5. Distributed Tracing with Sleuth
To debug issues across services:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
```
Now every log line includes a trace ID:

```
2024-10-28 10:15:23.456 INFO [article-service,abc123,xyz789] ArticleController: Creating article
2024-10-28 10:15:23.789 INFO [user-service,abc123,def456] UserService: Validating author
```
The same trace ID (abc123) links requests across services!
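To build intuition for how that correlation works, here is a simplified, hypothetical sketch in plain Java. Real Sleuth propagates the trace ID across process boundaries via HTTP headers and Kafka record headers; this toy version (class name `TraceContext` is mine) just shows the idea of one ID stamped onto every log line in a request's path:

```java
import java.util.UUID;

// Toy illustration of trace correlation: one ID is created at the edge
// and attached to every log line produced while handling the request.
public class TraceContext {
    private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Called once when a request enters the system (the gateway's job)
    public static void startTrace() {
        TRACE_ID.set(UUID.randomUUID().toString().substring(0, 8));
    }

    // Format a log line the way Sleuth tags it: [service,traceId] message
    public static String log(String service, String message) {
        return String.format("[%s,%s] %s", service, TRACE_ID.get(), message);
    }
}
```

Searching your log aggregator for that one ID then reconstructs the whole request path, which is exactly what Zipkin visualizes.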
Challenges and Solutions
Challenge 1: Data Consistency
Problem: When creating an article, I needed to verify the user exists. But user data lives in a different service.
Solution: Accepted eventual consistency, using events to replicate the data the Article Service needs:
- Article Service subscribes to "user-created" events
- Maintains a local cache of author IDs
- Validates against this cache (fast, no network call)
- Falls back to User Service API if cache miss
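The steps above can be sketched in plain Java. This is a minimal sketch, not the real implementation: the class name `AuthorCache` is hypothetical, the predicate stands in for the REST call to the User Service, and in practice `onUserCreated` would be invoked from the Kafka listener:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Local replica of "which author IDs exist", populated from user-created
// events. Cache hits avoid a network call; misses fall back to a lookup
// against the User Service and memoize the answer.
public class AuthorCache {
    private final Map<String, Boolean> knownAuthors = new ConcurrentHashMap<>();
    private final Predicate<String> userServiceLookup; // stands in for the REST call

    public AuthorCache(Predicate<String> userServiceLookup) {
        this.userServiceLookup = userServiceLookup;
    }

    // Called when a USER_CREATED event arrives on the "user-events" topic
    public void onUserCreated(String userId) {
        knownAuthors.put(userId, Boolean.TRUE);
    }

    // Fast path: local map. Slow path: one lookup, then memoized.
    public boolean isKnownAuthor(String userId) {
        return knownAuthors.computeIfAbsent(userId, userServiceLookup::test);
    }
}
```

The trade-off is staleness: between the user being created and the event being consumed, the cache may say "unknown", which is why the API fallback matters.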
Challenge 2: Distributed Transactions
Problem: What if article creation succeeds but comment initialization fails?
Solution: Used the Saga pattern with compensating transactions:

```java
@Service
public class ArticleCreationSaga {

    public void createArticle(ArticleRequest request) {
        String articleId = null;
        try {
            // Step 1: Create article
            articleId = articleService.create(request);

            // Step 2: Initialize comment thread
            commentService.initializeThread(articleId);

            // Step 3: Publish event
            eventPublisher.publishArticleCreated(articleId);
        } catch (Exception e) {
            // Compensate: roll back the steps that already succeeded
            if (articleId != null) {
                articleService.delete(articleId);
            }
            throw new SagaExecutionException("Failed to create article", e);
        }
    }
}
```
Challenge 3: Testing
Problem: Integration tests now require running 5 services + Kafka + Eureka.
Solution:
- Unit Tests: Test each service in isolation with mocked dependencies
- Contract Tests: Use Pact to verify service interactions
- Component Tests: Test each service with in-memory Kafka and embedded databases
- E2E Tests: Docker Compose to spin up entire stack for critical paths only
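To make the first bullet concrete, here is a hedged sketch of "test the service in isolation with mocked dependencies", in plain Java so it stands alone (the interface and class names are hypothetical; the real project would use JUnit and Mockito). The service depends only on an interface, so the test substitutes an in-memory fake for the database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Port the service depends on; the production implementation hits MongoDB.
interface AuthorRepository {
    Optional<String> findEmail(String authorId);
}

// Unit under test: business logic only, no framework, no network.
class ArticleValidator {
    private final AuthorRepository authors;

    ArticleValidator(AuthorRepository authors) {
        this.authors = authors;
    }

    boolean canPublish(String authorId) {
        return authors.findEmail(authorId).isPresent();
    }
}

// The "mock": an in-memory stand-in the test controls completely.
class InMemoryAuthorRepository implements AuthorRepository {
    private final Map<String, String> data = new HashMap<>();

    void add(String id, String email) {
        data.put(id, email);
    }

    @Override
    public Optional<String> findEmail(String id) {
        return Optional.ofNullable(data.get(id));
    }
}
```

Because the fake implements the same interface as the real repository, these tests run in milliseconds and need none of the five services, Kafka, or Eureka.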
Deployment with Docker
Each service gets its own Dockerfile:
```dockerfile
FROM openjdk:17-jdk-alpine
WORKDIR /app
COPY target/article-service-1.0.0.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```
Docker Compose for local development:
```yaml
version: '3.8'

services:
  eureka:
    build: ./eureka-server
    ports:
      - "8761:8761"

  api-gateway:
    build: ./api-gateway
    ports:
      - "8080:8080"
    depends_on:
      - eureka

  article-service:
    build: ./article-service
    deploy:
      replicas: 2  # Multiple instances
    depends_on:
      - eureka
      - kafka

  kafka:
    image: confluentinc/cp-kafka:latest
    ports:
      - "9092:9092"
```
Results and Learnings
The Good ✅
- Independent Scaling: Scaled Media Service to 5 instances during high load, kept other services at 1 instance
- Fault Isolation: Bug in Comment Service didn't affect article publishing
- Technology Freedom: Used MongoDB for articles, PostgreSQL for users, Redis for caching
- Faster Development: Teams could deploy independently without blocking each other
The Bad ❌
- Increased Complexity: 5× more codebases to manage
- Network Latency: Inter-service calls added 50-100ms overhead
- Debugging Difficulty: Tracing issues across services was challenging
- Higher Operational Cost: Running 5 services + infrastructure costs more
Key Takeaways
- Start with a Monolith: Build a well-structured monolith first. Microservices can be extracted later.
- Domain-Driven Design is Critical: Bad service boundaries create more problems than they solve.
- Invest in Observability: You can't manage what you can't measure. Logging, tracing, and metrics are essential.
- Automate Everything: CI/CD, deployments, and rollbacks must be automated.
- Communication is Key: Synchronous vs asynchronous, REST vs events - choose wisely for each use case.
When Should YOU Use Microservices?
Use microservices if:
- ✅ Your team has 10+ developers
- ✅ Different parts of your app have wildly different scaling needs
- ✅ You need to deploy features independently
- ✅ You have strong DevOps practices in place
Stick with a monolith if:
- ❌ You're building an MVP or proof of concept
- ❌ Your team is small (< 5 developers)
- ❌ You don't have experience with distributed systems
- ❌ Your organization isn't ready for the operational overhead
Conclusion
Microservices are a powerful architectural pattern, but they're not a silver bullet. They introduce significant complexity that must be justified by real business needs. For my learning project, they provided invaluable experience with distributed systems, but for many production applications, a well-designed monolith is the right choice.
The best architecture is the one that solves your current problems without creating bigger ones.
Want to discuss microservices architecture? Connect with me on LinkedIn or check out the source code on GitHub.