Resist Vendor Lock-In With Supabase

Not only is Supabase the open-source Firebase alternative, it just might be safer too.

In one of his recent YouTube videos, Theo Browne highlights the potential pitfalls that come with using a Platform-as-a-Service.

The video covers recent work fuzzing suspected Firebase-backed services for leaked credentials. As the video shows, this is an all-too-common pattern on these platforms: the "backend" is manipulated directly from client-side SDKs, so there's a high likelihood that credentials are carelessly exposed to the client. On top of potential secret leakage, many of these platforms, including Firestore, ship with weak default security settings. This is a horrible practice for the industry to standardize on, yet these platforms remain extremely popular despite the known security concerns, thanks to their ease of use and the rapid development they enable.

Beyond these security concerns, Firebase has long drawn the ire of developers for its tight coupling to Google's proprietary ecosystem, and for a cost structure that has led to more than a few horror stories about unexpectedly high bills (although those incidents can often be traced to poor database practices that cause costs to balloon). With so much apprehension toward Firebase, many developers went looking for alternatives and found themselves piecing together a solution from many different parts. Enter Supabase, which pitches itself as the "open-source Firebase alternative." Supabase offers solutions similar to Firebase's: a realtime database, object storage, user authentication and management, edge functions, and much more. Supabase, however, is built entirely from open-source software (in fact, they stipulate that Supabase will only ever include software that comes with an MIT, Apache 2.0, or equivalent license).

When asked for his thoughts on Supabase, Theo rightly points out that the same problems exist when you use its client SDKs. But he stresses that, unlike with Firebase, since Supabase is built entirely on open-source software (namely PostgreSQL), you can connect directly to the database, and doing so is actively encouraged by the CEO himself.

Wow! What a breath of fresh air that is! And as it turns out, even when "just" using Supabase this way, it is still great! You get an authentication table with Row-Level Security and robust security practices out of the box, access to their wonderful web-based database management tools, and the ability to easily integrate their other features, such as Edge Functions and Object Storage, all while maintaining tight control over your backend. It just makes sense! It offers the rapid development experience that is so valuable without compromising on security.

Still, you might be wondering what this looks like from the developer's perspective. Surely if you're writing directly to the database, this must be more unwieldy than using those client SDKs, right? Well, not necessarily!

Supabase still offers PostgREST out of the box, which exposes a REST API for database operations, but you control where those requests are fired from. And since you have complete control over your database, you can even take a hybrid approach, which is what I prefer. Since Supabase takes care of the auth.users table (and even makes it read-only through the web frontend), you might use the REST APIs they expose to handle authentication, but handle everything else yourself.

💡
If you want a table NOT to be exposed as an API through PostgREST, just put it in a schema other than the public schema. All tables under the public schema are exposed.
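For example, here is a minimal sketch (an Ecto migration, with a hypothetical internal schema and audit_log table) of how you might keep a table out of PostgREST's reach:

defmodule MyApp.Repo.Migrations.CreateAuditLog do
  use Ecto.Migration

  def change do
    # PostgREST only exposes the `public` schema by default, so a private
    # schema keeps this table off the API
    execute "CREATE SCHEMA IF NOT EXISTS internal", "DROP SCHEMA IF EXISTS internal"

    # The `prefix` option places the table under the `internal` schema
    create table(:audit_log, prefix: "internal") do
      add :event, :string
      timestamps()
    end
  end
end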

Even though you might want Supabase to handle authentication, you will likely want to extend the auth.users table with your own data. Since Supabase discourages modifying that table directly, the suggested approach is to create a table in the public schema (the default schema; the auth schema is managed by Supabase) to hold any additional information about the user. The documentation goes into much more detail under Managing User Data, but that is the gist. If you want this new table (let's call it Profiles) to stay in sync with auth.users, it's best to have the Profiles table reference the auth.users table, and to add a Postgres trigger that creates a new entry in Profiles whenever a new user is registered.

💡
It's also worth noting that during their recent General Availability Launch Week, Supabase announced new efforts to improve security practices across projects, including a Postgres linter and new Security Advisor and Performance Advisor dashboards to help you maintain a good security posture.
GitHub: supabase/splinter (Supabase Postgres Linter)
Blog: Supabase Security Advisor & Performance Advisor

Let's walk through an example of what this might look like using my favorite backend language, Elixir (it's also a favorite of Supabase themselves). I'll be using Ecto, Elixir's excellent database library.

Once you create your new project (using mix new, or perhaps more commonly in this situation, mix phx.new), you'll want to make sure you have Ecto as a dependency, and then model the auth.users table as well as the new public.profiles table.
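If you're starting without Phoenix (which pulls these in for you), a minimal dependency list in mix.exs might look like this (the version requirements are illustrative):

# mix.exs
defp deps do
  [
    {:ecto_sql, "~> 3.11"},
    {:postgrex, ">= 0.0.0"},
    # Req is used later for the auth API calls
    {:req, "~> 0.4"}
  ]
end

With dependencies in place, here are the two schemas: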

defmodule User do
  @moduledoc """
  This schema represents the default Supabase users table, which lives under the 'auth' schema.

  Since we don't actually manage this table, we will not write any migrations for it.

  This is mainly for convenience when unmarshalling data and working with users, so we
  can refer to the User struct rather than a generic map.

  Notice that we specify the primary key, which will be referred to later.

  Ecto schemas do not need to match the database table field-for-field, so we can
  mirror whatever minimal fields we want (`id` at a minimum).
  """
  use Ecto.Schema

  # id is a UUID, generated by Supabase rather than by our application
  @primary_key {:id, :binary_id, autogenerate: false}
  # The table lives under the `auth` schema, not `public`
  @schema_prefix "auth"
  schema "users" do
    # auth.users uses timestamptz columns, so we use UTC datetimes
    field :created_at, :utc_datetime_usec
    field :updated_at, :utc_datetime_usec
  end
end
defmodule Profile do
  @moduledoc """
  This schema holds extra information about users.
  """
  use Ecto.Schema
  import Ecto.Changeset

  # The primary key is the `id` column defined by `belongs_to` below,
  # so we disable the default autogenerated `id` field
  @primary_key false
  schema "profiles" do
    field :first_name, :string
    field :last_name, :string

    embeds_one :settings, Settings do
      field :default_portfolio, :string
      field :theme, Ecto.Enum, values: [dark: "Dark", light: "Light", system: "System"]
    end

    # This is the most important line, and the one required to properly
    # link this table to `auth.users`.
    # `foreign_key: :id` makes the column name match the migration below,
    # and the type must be :binary_id, which is what Supabase auth uses.
    belongs_to :user, User, type: :binary_id, foreign_key: :id, references: :id, primary_key: true

    timestamps()
  end
end
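You might also give Profile a changeset so the embedded settings are cast safely. A minimal sketch (the function names here are just conventions, not anything Supabase requires), to be added inside the Profile module above:

# Inside the Profile module (it already imports Ecto.Changeset)
def changeset(profile, attrs) do
  profile
  |> cast(attrs, [:first_name, :last_name])
  |> cast_embed(:settings, with: &settings_changeset/2)
end

defp settings_changeset(settings, attrs) do
  cast(settings, attrs, [:default_portfolio, :theme])
end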

Now we create the necessary migrations, starting with the public.profiles table.

defmodule CreateProfiles do
  use Ecto.Migration

  def change do
    create table(:profiles, primary_key: false) do
      # Notice the `prefix` option: the referenced users table lives under
      # the `auth` schema (while `public` is the default prefix).
      # The :uuid type matches the Supabase default, and this column is
      # set as the primary key.
      add :id, references(:users, on_delete: :delete_all, prefix: "auth", type: :uuid),
        primary_key: true

      # These fields should match what you have in your schema
      add :first_name, :string
      add :last_name, :string
      add :settings, :map

      # This adds the `inserted_at` and `updated_at` columns that back the
      # schema's timestamps(), which are required by default
      timestamps()
    end

    # The primary key already gets an index, so additional indexes are only
    # needed for other columns you expect to query often
  end
end

Now we add a migration to add the trigger:

defmodule CreateProfilesTrigger do
  use Ecto.Migration

  def up do
    # Function to insert a new profile
    execute """
    CREATE OR REPLACE FUNCTION public.create_profile_for_new_user()
    RETURNS TRIGGER AS $$
    BEGIN
      INSERT INTO public.profiles (id, inserted_at, updated_at)
      VALUES (NEW.id, now(), now());
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;
    """

    # Trigger to call the function after a user is inserted
    execute """
    CREATE TRIGGER trigger_create_profile_after_user_insert
    AFTER INSERT ON auth.users
    FOR EACH ROW
    EXECUTE FUNCTION public.create_profile_for_new_user();
    """
  end

  def down do
    execute "DROP TRIGGER IF EXISTS trigger_create_profile_after_user_insert ON auth.users;"
    execute "DROP FUNCTION IF EXISTS create_profile_for_new_user;"
  end
end

And that's it: you have now extended the protected auth.users table with a place to store your own user data, while keeping the security it provides. But how do you use it?

Well, let me show you how in fewer than 90 lines of code:

defmodule UserManagement do
  @req Req.new(
         base_url: Application.compile_env(:myapp, [:supabase, :base_url]),
         headers: [apikey: Application.compile_env(:myapp, [:supabase, :api_key])],
         url: "/auth/v1/:action"
       )

  def get_current_user(bearer_token) do
    Req.get!(
      @req,
      auth: {:bearer, bearer_token},
      path_params: [action: "user"]
    )
    |> Map.get(:body)
  end

  def signup_with_email_and_password(email, password) do
    Req.post!(
      @req,
      path_params: [action: "signup"],
      json: %{email: email, password: password}
    )
    |> Map.get(:body)
  end

  def login_with_email_and_password(email, password) do
    Req.post!(
      @req,
      path_params: [action: "token"],
      params: [grant_type: "password"],
      json: %{email: email, password: password}
    )
    |> Map.get(:body)
  end

  def send_password_recovery_email(email) do
    Req.post!(
      @req,
      path_params: [action: "recover"],
      json: %{email: email}
    )
    |> Map.get(:body)
  end

  def update_user(bearer_token, data \\ %{}) do
    {email, data} = Map.pop(data, "email")
    {password, data} = Map.pop(data, "password")

    body = %{
      "data" => data
    }

    body = if email, do: Map.put(body, "email", email), else: body
    body = if password, do: Map.put(body, "password", password), else: body

    Req.put!(
      @req,
      path_params: [action: "user"],
      auth: {:bearer, bearer_token},
      json: body
    )
    |> Map.get(:body)
  end

  def logout(bearer_token) do
    Req.post!(
      @req,
      auth: {:bearer, bearer_token},
      path_params: [action: "logout"]
    )
    |> Map.get(:body)
  end

  def send_email_invite(bearer_token, email) do
    Req.post!(
      @req,
      auth: {:bearer, bearer_token},
      json: %{email: email},
      path_params: [action: "invite"]
    )
    |> Map.get(:body)
  end
end

Pretty simple, right? You could surely condense this further if you chose to.
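As a quick usage sketch (assuming your repo module is MyApp.Repo): after signup, the trigger has already created the matching profiles row, so you can load it straight through Ecto. Note that the exact shape of the signup response depends on your auth settings (for example, whether email confirmation is enabled), so inspect the body before relying on it:

body = UserManagement.signup_with_email_and_password("ada@example.com", "a-strong-password")

# Depending on your auth settings, the user may be at the top level or under "user"
user_id = get_in(body, ["user", "id"]) || body["id"]

# The trigger has already created the matching row in public.profiles
profile = MyApp.Repo.get!(Profile, user_id)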

💡
The wonderful Req library takes care of a lot of the tedious parts of these requests, such as JSON-encoding and Bearer authentication. I highly recommend it over other HTTP clients for these reasons.

As long as you set your API key and Supabase instance URL in the application environment, this will have you ready to perform all of your user management tasks, and upon registration the new user will be reflected in both the auth.users table and the public.profiles table.
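Because the module above reads these values with Application.compile_env/2, they must be available at compile time, for example in config/config.exs. Here is a sketch with illustrative environment variable names (if you'd rather resolve them at runtime, build the Req struct inside a function instead):

# config/config.exs
import Config

config :myapp, :supabase,
  base_url: System.get_env("SUPABASE_URL", "https://your-project.supabase.co"),
  api_key: System.get_env("SUPABASE_ANON_KEY", "")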

And lastly, make sure you connect to your database using the connection string (or you can specify each field if you'd like).

import Config

config :myapp, MyApp.Repo,
  url: "myconnectionstring"

This barely scratches the surface of what you can do with Supabase, but I hope it at least demonstrates how quickly you can get started and how flexible it is to have complete control over your stack.

Of course, this example was in Elixir, but you could do the same in any other backend language, get the same benefits, and avoid falling victim to vendor lock-in!
