
> but I’d love to read more about when to know it’s time to replace my synchronous inter service http requests with a queue. What metrics should I consider and what are the trade offs. I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

Not OP but I have some background on this.

An Erlang loss system is like a set of phone lines. Imagine a special call center with N operators, each of whom takes a call, talks for some time (serving the customer) and hangs up. Unlike most call centers, however, they don't put you on hold: if all operators are busy, the system hangs up and you have to explicitly call again. This is somewhat similar to a server with N threads.

Let's assume N=3.

Under common mathematical assumptions (constant arrival rate, arrivals forming a Poisson process, i.e. exponentially distributed times between arrivals, and exponentially distributed service times) you can define:

1) “traffic intensity” (rho) is the ratio between arrival rate and service rate (intuitively, how “heavy” arrivals are with respect to “departures”)

2) the blocking probability is given by the Erlang B formula for N servers (threads) and traffic intensity rho: B(N, rho) = (rho^N / N!) / sum_{k=0..N} (rho^k / k!). With N = 3: if traffic intensity = 1 (arrival rate = service rate), the blocking probability is 6.25%. If the service rate is twice the arrival rate (rho = 0.5), it drops to roughly 1.3%. If the service rate is 1/10 of the arrival rate (rho = 10), the blocking probability is about 73.2%. (A quick sketch of the computation follows.)
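
In case it's useful, here's a minimal Python sketch of the Erlang B formula (nothing special, just the textbook definition, applied to the three scenarios above):

```python
from math import factorial

def erlang_b(n_servers: int, rho: float) -> float:
    """Erlang B blocking probability:
    B(N, rho) = (rho^N / N!) / sum_{k=0..N} (rho^k / k!)"""
    numerator = rho ** n_servers / factorial(n_servers)
    denominator = sum(rho ** k / factorial(k) for k in range(n_servers + 1))
    return numerator / denominator

# The three scenarios above, with N = 3 threads:
for rho in (1.0, 0.5, 10.0):
    print(f"rho = {rho:4}: blocking = {erlang_b(3, rho):.4f}")
# rho =  1.0: blocking = 0.0625
# rho =  0.5: blocking = 0.0127
# rho = 10.0: blocking = 0.7321
```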

I will try to write down part 2 when I find some time.

EDIT - Adding part 2

So, let's add a buffer. We said we have three threads, right? Let's say the system can handle up to 6 requests before dropping: one being processed by each thread, plus an additional 3 buffered requests. Under the same distribution assumptions, this is known as an M/M/3/6 queue.

Some math crunching under the previous scenarios for service and arrival rates (a code sketch follows the list):

- if service rate = arrival rate, the blocking probability drops to about 0.2%. Of course there is now a non-zero wait probability (close to 9%).

- if the service rate is twice the arrival rate, the blocking probability is about 0.006% and there is a roughly 1.5% wait probability.

- if the service rate is 1/10 of the arrival rate, the blocking probability is about 70% and the waiting probability is about 29%.
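
These numbers come from the standard M/M/c/K steady-state formulas. A rough Python sketch of the computation, treating the waiting probability as "an arriving request finds all threads busy but is not dropped":

```python
from math import factorial

def mmck_probs(c: int, k: int, rho: float) -> list[float]:
    """Steady-state probabilities p_0..p_K for an M/M/c/K queue:
    p_n is proportional to rho^n / n! for n <= c,
    and to rho^n / (c! * c^(n-c)) for c < n <= K."""
    weights = [
        rho ** n / factorial(n) if n <= c
        else rho ** n / (factorial(c) * c ** (n - c))
        for n in range(k + 1)
    ]
    total = sum(weights)
    return [w / total for w in weights]

# Three threads plus a buffer of three: M/M/3/6.
for rho in (1.0, 0.5, 10.0):
    p = mmck_probs(3, 6, rho)
    blocking = p[6]        # an arrival finds the system full and is dropped
    waiting = sum(p[3:6])  # all threads busy, but buffer space remains
    print(f"rho = {rho:4}: blocking = {blocking:.4f}, waiting = {waiting:.4f}")
# rho =  1.0: blocking = 0.0022, waiting = 0.0876
# rho =  0.5: blocking = 0.0001, waiting = 0.0151
# rho = 10.0: blocking = 0.7008, waiting = 0.2922
```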

This means that a buffer reduces request drops due to busy resources, but it also introduces a waiting probability. Pretty obvious. Another obvious thing is that you need additional memory for the queue: assuming a queue length of 3 and 1 KB messages, you need 3 KB of additional memory.

A less obvious thing is that you are adding a new component. Assuming "in series" behavior, i.e. requests cannot be processed when the buffer system is down, this decreases overall availability if the queue is not properly sized. What I mean is: if the process crashes when it uses more than 4 KB of memory, but you allow queue sizes up to 3 (3 KB in service + 3 KB buffered = 6 KB), availability is not 100%, because in some cases the system accepts more requests than it can actually handle.

An even less obvious point is that the availability picture changes if the server and the buffer have distinct "size" (memory) thresholds. It gets more complicated still if the server and the buffer are connected by a link that itself doesn't have 100% availability, because then you also have to take the link's unavailability into account.
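
To make the "in series" point concrete, a tiny sketch (the availability figures are made up purely for illustration):

```python
# Hypothetical per-component availabilities; every request needs the
# buffer, the link, and the server to all be up, so they multiply.
buffer_availability = 0.999
link_availability = 0.995
server_availability = 0.999

end_to_end = buffer_availability * link_availability * server_availability
print(f"end-to-end availability: {end_to_end:.4f}")  # ~0.9930
```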


Every vertical?

Yes, we just launched our Shopify alternative. Next is a Toast alternative for restaurants, a MindBody alternative for gyms, etc. We’re going to leverage these platforms to build a decentralized marketplace around those verticals.

Nice! Is it a website, an app, a Jupyter notebook,...?


I’ve just asked ChatGPT that, and this is the exact answer it gave :) Ironic, isn’t it?

Would you ask a prospective employee to send you code they wrote in their previous company?

Short answer: *no, I wouldn’t—and it’s generally a bad idea to ask.*

Here’s why, and what to do instead.

---

## Why you shouldn’t ask for code from a previous employer

1. *Legal and contractual issues*
   Most employment contracts assign ownership of code to the company. Asking for it puts the candidate at risk of violating:

   * IP assignment clauses
   * NDAs
   * Trade secret laws

2. *Ethical red flag*
   A candidate who does share proprietary code is signaling that they might do the same to your company later.

3. *Biased signal*
   Some candidates (especially senior or well-trained ones) will refuse on principle. You’d be selecting for the wrong behavior.

4. *Low signal-to-noise ratio*
   Real-world code is often:

   * Highly contextual
   * Entangled with proprietary systems
   * Hard to evaluate without full context
---

## What to ask for instead (better options)

### 1. *Open-source contributions*

Ask for:

* GitHub / GitLab profiles
* Public PRs or issues
* Personal projects

This shows real-world collaboration without IP risk.

### 2. *A take-home or live exercise*

Design something that:

* Can be completed in a few hours
* Mirrors the work they’ll actually do
* Is clearly stated as theirs to own

Bonus: allow them to explain tradeoffs rather than just write code.

### 3. *Code walkthrough (descriptive, not shared)*

Ask them to:

* Describe a complex system they built
* Explain architecture, tradeoffs, and failures
* Talk through how they’d improve it today

This is often more revealing than code.

### 4. *Pair programming / problem discussion*

Observe:

* How they reason
* How they ask questions
* How they handle ambiguity

---

## When it might be acceptable

Only if *all* of the following are true:

* The code is explicitly non-proprietary
* The candidate confirms they own the rights
* It’s already public (e.g., open source)

Even then, I’d still be cautious.

---

## Bottom line

> *If a candidate shares proprietary code, that’s not a plus—it’s a liability.*

Strong hiring processes test:

* Judgment
* Reasoning
* Communication
* Craft

—not their willingness to break an NDA.

If you want, tell me:

* The role (junior/senior, IC/lead)
* The domain (backend, ML, frontend, systems)

…and I can help you design a fair, high-signal interview process.


Please don't. Thank you.

I intentionally asked ChatGPT not because this is what I always do, but specifically to highlight that even ChatGPT itself (artificial "intelligence") knows better than what its own company actually does.

I would never ask this of any employee (not that I have this power where I work), for both ethical and legal reasons.


Very nice!

>Then you didn't need Kubernetes in first place.

I think that's precisely the point the author was trying to make.


>In our Docker Compose world, this problem didn’t exist. Services ran where we told them to run.

This is really interesting.

One of the big selling points of Kubernetes is that it takes care of scheduling on its own, distributes replicas and so on. This is especially useful when you are autoscaling pods.

But when you don't need autoscaling, especially if you have a limited number of microservices, you may as well deploy your applications on the nodes you want them to run on. And running a script on one node or on three doesn't really make a difference (even better if you can parallelize, but maybe it's not even necessary).

Yes, you could do the same with a mix of labels and advanced scheduling configurations, but if this is the main (or only) reason you use Kubernetes, and you don't really need autoscaling, Docker Compose or something similar makes sense.


I have no idea about DeltaChat, but it's not surprising.

Social media and chat apps are successful because of the network effect.

Let's say I have ten friends. Nine of them use the same app exclusively and one uses another app.

If everyone wants to stay in touch with all the others, the two most likely outcomes are that 1) the tenth friend migrates to the common app, or 2) the tenth friend installs both apps. In both cases, most interactions happen on the common app, unless the tenth friend is so influential that everyone switches to the other app just for their sake.

If you want your app to be popular, you want to discourage people from leaving it for something else and encourage people to pick yours (through better features or, more commonly, a larger user base). As a result, unless some external force pushes in the opposite direction (regulation, or people really hating lock-in), you'll build your service so that it's not that easy to leave. Better features are an alternative, but at some point new original features are hard to come by and can be expensive.

This is especially true if you want to make money through subscriptions, downloads or selling data, because the more users you have the more you earn, but it can also be true for free apps if the owner wants visibility.

Again, I don't know DeltaChat, but this is unfortunately the general trend.


Yes, it is an extreme example, but if your applications make your company millions of dollars or euros, even in a business that is not heavily regulated [1], mistakes or unavailability can cost a lot of money. Even if your company is not that big, mistakes in a crucial application everyone uses can cost time and money, and even expose the company to legal trouble. "Self-driving" coding in these situations is not ideal.

[1] Even if your domain is not one of the traditionally heavily regulated ones (military, banking,...), there is a surprising amount of "soft law" and "hard law" in everything from privacy to accounting and much more.


>I'm always on the lookout for the failure modes, because those failure modes are going to be my employment opportunities going forwards.

Exactly! I don't have a lot of experience with coding via LLMs, but lately I've been dabbling with that outside of my job precisely to find these failure modes... and they actually exist :)

