stekern's comments | Hacker News

I think the larger issue is how Kubernetes is often implemented in organizations: as part of internal developer platforms owned by central teams which, on purpose or by accident, can end up dictating how development teams should work. I think it's easy for such central teams to fall into the trap of trying to build smart, custom abstractions on top of Kubernetes to simplify things, but over time I believe these abstractions run a high risk of slowing down the rest of the org (good abstractions are really hard to come by!) and of creating fuzzy responsibility boundaries between central and development teams. As an example, this can affect an organizational structure by (re-)introducing functional silos between development and operations. Can a development team really be fully responsible for what they build if they rely on high-level, custom abstractions that only someone else in the org really understands?

Furthermore, if everything in an org is containerized and runs on Kubernetes, it's really easy to have a strong bias towards containerized workloads, which in turn can affect the kind of systems you build and their architecture.


Can you name any systems that are made possible by being non-containerized? What do you see as the advantage here?

The idea that containers are at all worse seems like a legacy view of the world. Today they seem to offer minimal overhead and access to all the same hardware capabilities as native apps.


(Author here)

The main focus of the post is to highlight some of the long-term risks and consequences of standardizing around Kubernetes in an org. If you've done a proper evaluation and still think Kubernetes makes sense for you, then it's probably a sound decision. But I think many skip the evaluation step or do it hastily. The post is targeted more towards organizations with at least a handful of employees. What works for an indie dev does not necessarily scale to SMBs or larger orgs - those are very different contexts.

> The article suggests just using EC2 instead of K8s

Not quite. I suggest strongly considering using managed services when it makes sense for your organization. The equivalent of k8s in terms of managed services would be Amazon Elastic Container Service (ECS) as the control plane, perhaps with AWS Fargate as the compute runtime.

(I wouldn't really call EC2 a managed service - it's more in the territory of Infrastructure as a Service)


I may have misread ECS as EC2 and I apologise for that.

But the argument you make should certainly be applied to other managed services. AWS generally has opaque pricing and significant hidden complexity - are you really going to subscribe to just ECS and Fargate? Or are you subscribing to a bunch of other complexities like CloudWatch, IAM, EBS, etc.? If I want to control costs, do I also need some third-party service? How many IOPS does my database need, anyway?

I’m not an AWS user, because every time I’ve looked at it I’ve come away shaking my head at how complex everything is, and how much vendor specific technology I need to learn just to do something simple.

And, having run organisations with more than a handful of employees, if there’s anything I’ve learned it’s that simplicity is a virtue.

In fact, the last company I was involved with went all-in on AWS, which involved formal training for everyone, very high costs, and multiple dedicated administrators. My part of the business predated that decision, and we did well over 10x the throughput with a single dedicated ops expert, using our own gear, orchestrated with Docker Swarm. Our costs were literally 10% of the AWS costs of the other part of the business, including amortisation of the hardware, and that’s before all the extra training and operational costs of AWS.

Today, it’s far easier to run K8s than it was to run Swarm back then. So quite honestly, if you’re an indie developer like me, K8s is almost a no-brainer, and if you’re a mid-sized SaaS shop, AWS is just a really great example of spending tens of thousands of dollars a month to say you’re running in AWS.


Not OP, but I'm using NextDNS -- the managed version of a PiHole. I have uBlock Origin and uMatrix in all my desktop browsers, and I'm seeing around 15% blocked queries on NextDNS. Most of the blocked traffic seems to come from my mobile devices.


> Soltani says he rarely recommends steps such as using ad blockers or VPNs for most people. They require too much attention and persistence to deliver on privacy, and even then they are limited in their effectiveness.

Decent article, but a bit light on easy-to-use, practical details. They seem to make the case that ad-blockers are ineffective and too much of a hassle, but that's not my experience at all. I agree that tracking and privacy are a cat-and-mouse game, but I think ad-blocking is currently one of the easiest and most effective ways to block trackers. I installed uBlock Origin on my family's computers a couple of years back, and it has Just Worked™.

The next logical step would probably be PiHole or NextDNS, both for deeper blocking than browser ad-blockers offer and for blocking outside of a browser. I have DoT set up on my router and most of my portable devices, and there haven't been any hiccups yet.

All bets are off once tracking is implemented server-side, though. The way GDPR has been enforced so far has led to a lot of bad practices in the tech industry, so we'll probably need new laws that regulate data gathering.


Can't you use multi-stage builds to achieve this?
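For reference, multi-stage builds let you compile in one image and ship only the artifacts in another. A minimal sketch (the Go toolchain, image names, and paths here are just illustrative):

```dockerfile
# Build stage: full toolchain, used only to compile
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: only the compiled binary is copied over,
# so the build toolchain never ends up in the shipped image
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```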


Thank you, I've never seen this :)


For regular consumers, I'd say the biggest value proposition for Firefox on mobile, at least on Android, is the support for browser extensions such as uBlock Origin, and thus mobile ad-blocking.

As an example, there's currently no straightforward way for the average Joe to set up ad-blocking on a mobile device (excl. DoH/DoT using PiHole, NextDNS, etc.). Many non-technical users are, however, familiar with ad-blocking extensions in their desktop browsers. They might not know the nitty-gritty of how ad-blocking works, but most people seem to have one installed. Installing and using such an extension is just as seamless on Firefox for Android as it is on a regular computer.


Edge on both Android and iOS has Adblock built in and Safari on iOS can have ad blocking very easily enabled with a content blocking app like AdGuard. Firefox is arguably harder to set up for ad blocking than either of these.


Yeah, I was only speaking to Android in my post, and iOS is a different beast with different restrictions when it comes to browser implementations. Is Microsoft in any way competitive or innovative in the browser sphere? I haven't used Windows for ages so I'm out of the loop there.

If AdBlock is a built-in feature in Edge on Android, I assume you still need to opt in through the settings (e.g., Settings -> Enable AdBlock), no? In that case the setup in Firefox is similarly easy: Addons -> uBlock Origin. Perhaps a bit less intuitive for an average user, but still very straightforward to set up.


If you're using IaC and have it set up in a CI/CD pipeline, you could also achieve the same by having a cronjob set a flag outside of work hours and using conditionals in your IaC based on the value of that flag (e.g., for Terraform `count = var.scale_down ? 0 : 1`).
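A minimal Terraform sketch of that pattern (the resource and variable names are just for illustration):

```hcl
variable "scale_down" {
  type    = bool
  default = false
}

# A cronjob flips this flag outside work hours (e.g., via
# TF_VAR_scale_down=true) and triggers a new pipeline run.
resource "aws_instance" "worker" {
  count         = var.scale_down ? 0 : 1
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t3.medium"
}
```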


There were a few reasons why I couldn't use IaC:

1) Using conditionals to achieve this usually makes Terraform unmanageable, especially if your Terraform setup is complex.

2) Sometimes you'd need to perform more complex steps to save money; for example, in the case of ElastiCache you'd need to snapshot, delete, and re-create.

3) Using Terraform in a Lambda to schedule this was not straightforward (in my current company), so I gave up on making it work :D


She may be oversimplifying certain aspects, and the examples may not be that practical, but my main takeaway from the article was that it can be advantageous to shift focus: instead of trying to take control (by sheer force of will) in a situation where you've already succumbed to some kind of temptation, you can avoid ending up in those kinds of situations by taking comparatively easier preventive measures earlier in the cycle.


I assume the parent means native support for vertical tabs in the browser, as opposed to through a third-party extension.


a.) Yes

b.) With the extension API changing several times, it was not clear the plugin would make it.


Cool project!

I've recently created something similar for personal use. I have many websites (mainly webshops) I want to be notified about changes on, but they don't have RSS feeds, subscriptions, or APIs that you can use.

I set up a cron job that runs daily, scrapes websites according to some XPaths, and saves the results to a DB. If any new elements have appeared, an email is sent out. The biggest challenge is handling false positives: being able to distinguish between a genuinely new element and, e.g., a previously seen element with an updated title or description. For websites that directly expose what seem to be unique, server-side identifiers in their HTML, using those as a primary key seems to work well. If that's not available, the href of the HTML element seems to be fairly static.
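Roughly, the dedup logic looks like this (a simplified sketch: the `data-id` attribute and the schema are placeholders, the XPaths are assumed to select anchor-like elements, and it uses requests, lxml, and sqlite3):

```python
import sqlite3

import lxml.html
import requests

db = sqlite3.connect("seen_items.db")
db.execute("CREATE TABLE IF NOT EXISTS items (key TEXT PRIMARY KEY, title TEXT)")

def item_key(element):
    # Prefer a server-side identifier if the site exposes one;
    # fall back to the href, which tends to be fairly stable.
    return element.get("data-id") or element.get("href")

def new_items(url, xpath):
    tree = lxml.html.fromstring(requests.get(url, timeout=30).content)
    fresh = []
    for el in tree.xpath(xpath):
        key = item_key(el)
        if key is None:
            continue
        # INSERT OR IGNORE only inserts unseen keys, so an element whose
        # title changed but whose key is stable won't trigger a new email.
        cur = db.execute(
            "INSERT OR IGNORE INTO items (key, title) VALUES (?, ?)",
            (key, el.text_content().strip()),
        )
        if cur.rowcount == 1:
            fresh.append(key)
    db.commit()
    return fresh
```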

Do you have any thoughts on the issue of false positives and unique identifiers?


Thanks! I haven't given the issue of unique identifiers too much thought because in most cases I assume the item URL is less likely to change than the text and will serve as the unique identifier for the RSS reader. It's possible to create feeds without item URLs in Feed Creator, so in those cases maybe letting users select an identifier to be the guid element in the feed would be helpful.

Generally though, I'm hoping users understand that feeds produced in this way could be a little more brittle than if the site offered its own feed.

One difference with your approach is that you have the data from previous fetches in your database. With Feed Creator everything related to producing the feed (source URL, selectors, filters, etc.) is embedded in the feed URL to avoid having to record data on the server. So each request is treated as if it's the first one - the server doesn't know if an item in the feed is new or old. If we referred to feed data from previous fetches, maybe we could let users introduce a delay before having a new item added to the feed. This might help in cases where a typo is spotted and corrected by the publisher minutes after publication. Can't think of a much better way of avoiding false positives at the moment though.

