Your startup has 4 engineers, 12 users, and 47 microservices. Now you spend more time debugging network calls than shipping features.
Friend's company went microservices. Had a Django monolith. Worked fine. Deploys took 3 minutes.
CTO read a Netflix blog post. Two sprints later: five repos for what used to be five Django apps in a single project. User service, auth service, email service, notification service, and of course an API gateway to tie it all together.
Deploys went from 3 minutes to 45. They hired a DevOps engineer just to babysit Kubernetes. A junior dev can't add a feature anymore because it touches three services and she doesn't have access to all the repos.
Three months in, they solved zero scaling problems. Created fifteen operational ones.
Let me translate some common justifications.
"Netflix does it." Real reason: resume padding. Reality: Netflix has 2,000 engineers. You have four.
"We need to scale." Real reason: FOMO. Reality: Your 500 users could be served by a Raspberry Pi.
"Independent deploys." Real reason: one team broke prod last week. Reality: Fix your tests instead of restructuring your entire architecture.
"Technology freedom." Real reason: someone wants to write Rust. Reality: You have three people. You don't need three languages.
In a monolith, you call a function:
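```python
user = get_user(user_id)  # 0.1ms, never fails, type-checked
```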
In microservices, the same operation becomes an adventure:
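```python
import logging

import requests
from requests.exceptions import ConnectionError, Timeout

logger = logging.getLogger(__name__)

try:
    response = requests.get(
        f"http://user-service/users/{user_id}",
        timeout=5,
        headers={"X-Request-ID": trace_id},
    )
    response.raise_for_status()
    user = response.json()
except Timeout:
    # Retry? Circuit breaker? Fallback?
    logger.error("User service timeout", extra={"trace_id": trace_id})
except ConnectionError:
    # Service down? Network issue? DNS problem?
    pass
```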
Now multiply this by every service call. Add service discovery, load balancing, TLS certificates, auth tokens between services. Every service boundary is a new way for things to break.
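Here's a minimal sketch of the kind of wrapper each of those concerns forces you to write. The `call_service` helper, retry count, and backoff numbers are illustrative assumptions, not from any particular library:

```python
import time

import requests
from requests.exceptions import ConnectionError, Timeout

def call_service(url: str, trace_id: str, retries: int = 3) -> dict:
    """Hypothetical per-call wrapper: retry with exponential backoff."""
    for attempt in range(retries):
        try:
            response = requests.get(
                url,
                timeout=5,
                headers={"X-Request-ID": trace_id},
            )
            response.raise_for_status()
            return response.json()
        except (Timeout, ConnectionError):
            # Back off 1s, 2s, 4s between attempts.
            time.sleep(2 ** attempt)
    raise RuntimeError(f"service unreachable after {retries} attempts: {url}")
```

And that's before you've decided what a sane fallback even is when the retries run out.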
A monolith can fail because the database is down, you ran out of memory, or there's a bug. That's it.
Microservices can fail for all those reasons, plus: network partitions, services can't find each other, timeouts cascade, circuit breakers trip when they shouldn't, message queues back up, distributed transactions fail halfway, and data gets inconsistent across services in ways that are nearly impossible to debug.
Here's what actually works for most teams: one codebase, one deploy, with clear internal boundaries.
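```
# Modular Monolith structure:
app/
├── modules/
│   ├── users/
│   │   ├── api.py        # Public interface
│   │   ├── service.py    # Business logic
│   │   └── models.py     # Internal models
│   ├── billing/
│   │   ├── api.py
│   │   └── ...
│   └── notifications/
│       └── ...
└── main.py
```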
Modules communicate through function calls with contracts:
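```python
from modules.users.api import get_user
from modules.billing.api import charge

user = get_user(user_id)  # an in-process function call, not an HTTP request
```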
Type-checked. Instant. Reliable. No network. No timeouts. No service discovery.
And here's the kicker: if you later decide you really do need to extract billing into its own service, the boundary is already there. Just swap the import for an HTTP client.
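As a rough sketch of that swap, assuming billing's public interface lives in `modules/billing/api.py` as in the tree above (the `charge` signature and service URL here are illustrative):

```python
# modules/billing/api.py -- after extracting billing into its own service.
# Callers keep importing `charge` from the same place; only the body changes.
import requests

BILLING_SERVICE_URL = "http://billing-service"  # illustrative endpoint

def charge(user_id: int, amount_cents: int) -> dict:
    # Same contract the in-process version exposed, now over HTTP.
    response = requests.post(
        f"{BILLING_SERVICE_URL}/charges",
        json={"user_id": user_id, "amount_cents": amount_cents},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```

Call sites don't change, and the timeout and error handling live in one module instead of being scattered across every caller.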
90% of the modularity benefits. 0% of the distributed systems tax.
Shopify runs a modular monolith. Handles Black Friday. If they don't need microservices for one of the highest-traffic e-commerce events in the world, you probably don't need them for your startup.
You might actually need microservices if:

- You have enough engineers that whole teams can own a service end to end
- One component has genuinely different scaling or availability needs from the rest
- Coordinating deploys across teams has become the actual bottleneck

You probably don't need them if:

- Your entire engineering team fits around one table
- Your monolith deploys in minutes and handles your current traffic
- The main argument for them is a blog post from a company 500 times your size
Build for your scale today. Not the scale you dream about.
— blanho