Supabase | https://supabase.com/ | Remote

Supabase is the Postgres Development Platform, and we are looking for Product Managers and Technical Program Managers. You will be working with very strong Product Engineers across a wide variety of products (Postgres, Realtime, Storage, Queues, etc.). If you enjoy working on developer tools and like to get your hands dirty, check out our open product roles:

- Product Manager https://jobs.ashbyhq.com/supabase/74542052-f648-48fb-a8fe-a8...

- Technical Program Manager https://jobs.ashbyhq.com/supabase/b83c7316-77ce-49a8-a199-9f...

We are also hiring for other engineering and growth roles - https://supabase.com/careers


This is indeed pretty cool. They also have the `aws_s3` extension [1] for doing the same thing inside Postgres. Unfortunately, the extension isn't open source.

[1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_...


We might be able to achieve the same thing now with our S3 Wrapper:

https://supabase.github.io/wrappers/s3/
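
For reference, setting it up looks roughly like this - a sketch based on the Wrappers docs, where the bucket, credentials, and column layout are all placeholders:

    -- Enable the S3 wrapper (handler/validator names per the Wrappers docs)
    create foreign data wrapper s3_wrapper
      handler s3_fdw_handler
      validator s3_fdw_validator;

    -- Point a server at your bucket credentials (placeholder values)
    create server s3_server
      foreign data wrapper s3_wrapper
      options (
        aws_access_key_id 'your_access_key',
        aws_secret_access_key 'your_secret_key',
        aws_region 'us-east-1'
      );

    -- Expose an object as a queryable foreign table (hypothetical schema)
    create foreign table s3_orders (
      id text,
      amount text
    )
      server s3_server
      options (
        uri 's3://my-bucket/orders.csv',
        format 'csv',
        has_header 'true'
      );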


We don't support S3 event notifications directly, but you can achieve similar functionality by using Database Webhooks [1]. You can trigger any HTTP endpoint or a Supabase Edge Function [2] by adding a trigger to the objects table [3] in the Storage schema.

[1]: https://supabase.com/docs/guides/database/webhooks [2]: https://supabase.com/docs/guides/functions [3]: https://supabase.com/docs/guides/storage/schema/design
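
As a sketch, the trigger a Database Webhook creates looks roughly like this, assuming the supabase_functions.http_request helper that the webhooks feature installs (the endpoint URL is a placeholder):

    -- Fire an HTTP POST whenever a new object lands in Storage
    create trigger on_object_created
      after insert on storage.objects
      for each row
      execute function supabase_functions.http_request(
        'https://example.com/my-endpoint',  -- placeholder endpoint
        'POST',
        '{"Content-Type":"application/json"}',
        '{}',
        '1000'  -- timeout in ms
      );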


The Supabase CLI [1] provides a way for you to manage functions, triggers and anything else in your Postgres database as a migration. These migrations would be checked into your source control.

You can then take it a step further by opting in to Branching [2] to better manage environments. We just opened up the Branching feature to everyone [3].

[1]: https://supabase.com/docs/guides/cli/local-development#datab... [2]: https://supabase.com/docs/guides/platform/branching [3]: https://supabase.com/blog/branching-publicly-available
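
For example, a migration created with `supabase migration new add_updated_at` is just a SQL file under supabase/migrations/ that you commit and apply with `supabase db push`. The file, table, and column names here are hypothetical:

    -- supabase/migrations/20240101000000_add_updated_at.sql (hypothetical)
    create or replace function public.set_updated_at()
    returns trigger
    language plpgsql
    as $$
    begin
      new.updated_at := now();
      return new;
    end;
    $$;

    create trigger set_updated_at
      before update on public.documents
      for each row
      execute function public.set_updated_at();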


Thanks for the response. I don’t think I was super clear about what I meant - I’m more talking about the following scenario:

Let’s say that we are using the TypeScript SDK to make our app and need to do some fancy aggregation on a table that isn’t supported by PostgREST natively (specifically, we can’t retrieve the data with PostgREST out of the box with its typical setup). PostgREST tells us that in this case we can do two things: create a view or make a Postgres function. Each has its pros and cons, but with either choice we now have this problem: some of our business logic is in a sproc/function/view and some of it is in TypeScript. In a typical setup using an ORM it would all be in TypeScript.

The conventional wisdom is that databases are dumb state holders and all of the business logic goes in the app - Supabase attempts to turn this on its head and says that, no, actually it’s OK to store business logic in the db. But if we do that, we have a new problem: we no longer have a blessed path for where the line is on what goes where. We don’t have good patterns for storing and managing this, so other developers no longer understand where to put things or how to work with our app, because it no longer follows the principle of least astonishment. That’s what I mean by framework in this context.

Maybe all that is necessary here is a battle-tested example project that demonstrates the “correct” way to make this demarcation. But as-is, it steers me away from using Supabase for more complex projects if I even think they will need something that PostgREST won’t support without making a view or sproc/function.
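
For concreteness, the function route in the scenario above might look like this (table, columns, and names are all hypothetical), with the TypeScript side reduced to a single supabase.rpc('monthly_totals') call:

    -- Hypothetical aggregation that PostgREST exposes under /rpc
    create or replace function public.monthly_totals()
    returns table (month date, total numeric)
    language sql
    stable
    as $$
      select date_trunc('month', created_at)::date as month,
             sum(amount) as total
      from public.orders
      group by 1
      order by 1;
    $$;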


The S3 API reference [1] is the closest thing to a formal spec there is. The requests, responses and error codes are pretty well documented.

[1]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operatio...


Thanks for this. Do you know how redirects are handled?

I searched on that page and didn't find anything. I've seen it mentioned elsewhere that they are catered for, but I haven't found any documentation for that.

I'm particularly interested in the temporary ones - 302, 303 and 307.


Perhaps you could open source the test suite as a standalone thing so that other S3-compatible APIs can prove they match the Supabase standard.


There is no per-request pricing.


I agree that S3 compatibility is a bit of a moving target, and we would not implement any of the AWS-specific actions.

We are transparent about the level of compatibility - https://supabase.com/docs/guides/storage/s3/compatibility

The most commonly used APIs are covered, but if something is missing, let me know!


We have discussed this internally before, since we have seen some users delete the metadata in the storage schema and expect the underlying object to be deleted too. We have also discussed whether we should convert our entire storage server to just be a Postgres extension.

The source of truth also matters here - whether it's the database or the underlying S3 bucket. I think having the underlying storage bucket be the source of truth would be more useful. In that scenario we would sync the metadata in the database to match what's actually being stored, and if we noticed the metadata of an object missing, we would add it in, as opposed to deleting the object in storage. This would make it easier for you to bring your own S3 bucket with existing data and attach it to Supabase Storage.


This falls in line with how SQL Server did its FILESTREAM stuff, but it was so clunky that nobody used it except for some madmen.


We are hosted on AWS and are just passing the cost on to our users. We make no margin on egress fees. Deploying storage on other clouds, including Fly.io, is planned.

We are actively working on our Fly integration. At the start, the pricing is going to be exactly the same as our hosted platform on AWS - https://supabase.com/docs/guides/platform/fly-postgres#prici...


Thanks. So a user who only wants Postgres has to pay for storage, etc.?


Here is an example of DuckDB querying Parquet files directly from Storage, now that Storage supports the S3 protocol - https://github.com/TylerHillery/supabase-storage-duckdb-demo

https://www.youtube.com/watch?v=diL00ZZ-q50
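
The gist of the demo, as a sketch - the project ref, region, bucket, and keys are all placeholders:

    -- DuckDB: scan Parquet in Supabase Storage over its S3-compatible API
    INSTALL httpfs;
    LOAD httpfs;
    SET s3_endpoint = 'myproject.supabase.co/storage/v1/s3';  -- placeholder project ref
    SET s3_region = 'us-east-1';
    SET s3_access_key_id = 'your_access_key';
    SET s3_secret_access_key = 'your_secret_key';
    SET s3_url_style = 'path';  -- required for non-AWS endpoints

    SELECT count(*) FROM read_parquet('s3://my-bucket/data/*.parquet');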


Yes. DuckDB works very well with Parquet scans on S3 right now.


Does it work well with Hive tables storing Parquet files on S3?

