This post is now deprecated

This post might have outdated content and is no longer maintained.

Prisma — Cloud Native GraphQL Database API Layer — Deep Dive

Cloud Native — GraphQL — Database API Layer
A futuristic Cloud Native — GraphQL Database API Layer

P.S. This post is intended to be a giant deep dive into Prisma. If you think any part is missing, unclear, or can be improved in any way, reach out to me via the comments or tweet to me.

Table of Contents

  1. Introduction
  2. Setup
  3. Prisma init Command Breakdown
  4. Directory Structure
  5. Run the Application Server
  6. Teach a Man How to Fish
  7. Access the Database
  8. Permissions
  9. Advanced Queries
  10. Advanced Mutations
  11. Subscriptions
  12. Export/Import Data
  13. Monitoring
  14. Advanced Prisma and GraphQL Stuff
  15. Gotchas
  16. Plugs
  17. Further Reading

Introduction

This post talks about one of the most fascinating GraphQL tools I have come across.

What is Prisma?

Prisma is a tool (read: a product that has evolved through a lot of production use) that converts your database into a very powerful GraphQL API.

Think of Prisma as a “cloud native”, “GraphQL first” database, but it is more than that, as we will see.

Prisma is at the frontier of a very powerful GraphQL ecosystem (https://www.prismagraphql.com/docs/graphql-ecosystem/).

Where does Prisma sit in my setup?

What does (can) Prisma do?

You can be in one of the following situations:

You are starting from scratch. Prisma can be used to automagically build tables and a powerful GraphQL API on top of them (this is what we will mostly cover in this post). You can then import your data into Prisma and be merry.

Prisma can already do this very effectively with MySQL, and others (MongoDB, Elasticsearch, Postgres, etc.) are coming soon.

You have an existing database but you want to expose it via a GraphQL API

You can either do this task at the application server level, or Prisma can act as a thin GraphQL wrapper on top of your existing database.

You rely on 3rd party GraphQL and REST APIs for information

At the application server level, you can use REST APIs as sources of data. The community is actively working on tools to make this experience much better.

Any combination of the above three

Prisma is not limited to one setup; it can effectively combine any permutation and combination of the above at the application layer.

On top of this, Prisma is language agnostic. All we need to do is come up with “GraphQL bindings” in the corresponding language. “What are GraphQL bindings?” is a question answered very well in this post.

Setup

Let us start with baby steps, installation first.

npm install -g prisma

This should give you a globally installed prisma command (it can also be installed via npm in a scoped way). Help is available via prisma help or prisma help <command>.

You will also need to install docker, docker-compose, and node for the setup to work.

Prisma init Command Breakdown

Now we can use the prisma init command to get started. Run the command, go go go… did you do it yet? Let us explore the options it provides:

divyendusingh [prisma-examples]$ prisma init
? How to set up a new Prisma service? (Use arrow keys)
❯ Minimal setup: database-only
  GraphQL server/fullstack boilerplate (recommended)

Let us go with the recommended option in this step.

Running $ graphql create ...
? Directory for new GraphQL project (.)

It is running a CLI command called graphql. Wait, what? Remember your training, soldier (read: GraphQL Ecosystem, GraphQL CLI).

Straight from the docs: 📟 graphql-cli is a command line tool for common GraphQL development workflows.

Knowing this is helpful: for many operations, prisma uses the underlying graphql command.

We can specify the directory and move on. Note that if you choose the current directory (via .), it must be empty.

The next question it asks you is to choose a boilerplate:

? Choose GraphQL boilerplate project:
  node-basic              Basic GraphQL server (incl. database)
❯ node-advanced           GraphQL server (incl. database & authentication)
  typescript-basic        Basic GraphQL server (incl. database)
  typescript-advanced     GraphQL server (incl. database & authentication)
  react-fullstack-basic   React app + GraphQL server (incl. database )

Let us go with node-advanced, although you can go with whatever you feel like; the concepts won’t vary much.

We will notice that it gets the boilerplate(s) from a specific repository.

? Choose GraphQL boilerplate project: node-advanced GraphQL server (incl. database & authentication)
[graphql create] Downloading boilerplate from https://github.com/graphql-boilerplates/node-graphql-server/archive/master.zip...

Which brings us to the next question: can you create your own boilerplates? Absolutely! Ping me or join the community if you want to explore this further.

Moving on to the next question:

? Please choose the cluster you want to deploy "deep-dive@dev" to (Use arrow keys)
❯ prisma-eu1      Public development cluster (hosted in EU on Prisma Cloud)
  prisma-us1      Public development cluster (hosted in US on Prisma Cloud)
  local           Local cluster (requires Docker)

Log in or create new account on Prisma Cloud
Note: When not logged in, service deployments to Prisma Cloud expire after 7 days.
You can learn more about deployment in the docs: http://bit.ly/prisma-graphql-deployment

The CLI prompts you to select a cluster to which your Prisma service should be deployed. At the time of writing, there are two public clusters; however, we will use the local (Docker based) cluster for this post. Again, feel free to choose any; the concepts don’t change much.

Aha! Selecting a cluster runs a few more commands. Here is the output printed by the CLI:

? Please choose the cluster you want to deploy "deep-dive@dev" to
Added cluster: local to prisma.yml
Creating stage dev for service deep-dive ✔
Deploying service `deep-dive` to stage `dev` on cluster `local` 1.7s

Changes:
... (many changes, not listed for brevity)

Applying changes 3.8s

Hooks:
Importing seed dataset from `seed.graphql` 733ms

Your GraphQL database endpoint is live:

  HTTP:  http://localhost:4466/deep-dive/dev
  WS:    ws://localhost:4466/deep-dive/dev

Checking, if schema file changed 366ms
Writing database schema to `src/generated/prisma.graphql`  1ms
Running $ graphql prepare...

Next steps:
  1. Change directory: `cd deep-dive`
  2. Start local server and open Playground: `yarn dev`

Let us explore this output, bit by bit.

Deploying service deep-dive to stage dev on cluster local 1.7s

Internally, this is running the prisma deploy command (we will be using this command very often). The three keywords we can see in this line of the output are service, stage, cluster.

Interestingly, all of this can be set or changed in the prisma.yml file. We have one such file via the boilerplate being used here.

Changes:
... (many changes, not listed for brevity)
Applying changes 3.8s

The prisma deploy command also takes a diff of the already deployed service (in this case there is none, because we are deploying the service for the first time) and applies the relevant changes.

Hooks:
Importing seed dataset from `seed.graphql` 733ms

Your GraphQL database endpoint is live:

  HTTP: http://localhost:4466/deep-dive/dev
  WS:   ws://localhost:4466/deep-dive/dev

If you want a service to have some initial data, you can use the seed property in the prisma.yml file.
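For reference, here is a sketch of what that could look like in the boilerplate’s database/prisma.yml (property names follow the Prisma 1 configuration format; the secret value is a placeholder):

```yaml
# database/prisma.yml (sketch; secret value is a placeholder)
service: deep-dive
stage: dev
cluster: local
datamodel: datamodel.graphql
secret: placeholder-secret
# data imported when the service is first deployed
seed:
  import: seed.graphql
```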

As suggested by the output, the Prisma endpoint is live and you have built-in WebSockets (GraphQL Subscriptions support, wow! wow!).

Checking, if schema file changed 366ms
Writing database schema to `src/generated/prisma.graphql`  1ms
Running $ graphql prepare...

Aha! Next up is the generated prisma.graphql file. This is essentially the full GraphQL API that Prisma provides for you, based on your datamodel.graphql file. It defines the CRUD operations for the types in the datamodel.graphql file.

Again, Prisma is using the underlying graphql prepare command, which the GraphQL CLI docs describe as follows:

graphql prepare                Bundle schemas and generate bindings

More on this later in the post — This is very important.

Lastly, the output suggests that you cd into the directory (cd deep-dive). Let us do that and explore the directory structure before running the yarn dev command.

Directory Structure

divyendusingh [deep-dive]$ tree -I node_modules
.
├── README.md
├── database
│   ├── datamodel.graphql
│   ├── prisma.yml
│   └── seed.graphql
├── package.json
├── src
│   ├── generated
│   │   └── prisma.graphql
│   ├── index.js
│   ├── resolvers
│   │   ├── AuthPayload.js
│   │   ├── Mutation
│   │   │   ├── auth.js
│   │   │   └── post.js
│   │   ├── Query.js
│   │   └── index.js
│   ├── schema.graphql
│   └── utils.js
└── yarn.lock

5 directories, 15 files

Let us explore the most important bits and see how they connect together.

The database folder contains everything related to the Prisma service: the datamodel (datamodel.graphql), the service configuration (prisma.yml), and the seed data (seed.graphql).

The src folder contains the application server: its schema (schema.graphql), its resolvers, and the generated Prisma database schema (src/generated/prisma.graphql).

Run the Application Server

Now, after taking a brief look at the directory structure, we can run the yarn dev command to start the application server. We then have the following endpoints available.

http://localhost:4000 — Application server with the custom schema that we just explored. This is for the GraphQL API defined by schema.graphql.

http://localhost:4466/deep-dive/dev — Prisma service with the generated GraphQL schema; this was set up by the initial prisma deploy. This is for the GraphQL API defined by the database schema in the prisma.graphql file (also known as the Prisma database schema).

GraphQL Playground should be available if you open these links in a browser.

Resolvers in the application server use the underlying Prisma service via GraphQL bindings.
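As a sketch of what that looks like (following the prisma-binding call style; the drafts resolver and isPublished field appear in the node-advanced boilerplate, though the exact code may differ):

```javascript
// ctx.db is the Prisma binding instance the boilerplate puts on the context.
// The resolver simply delegates to the generated Prisma API; `info` forwards
// the client's selection set so Prisma returns exactly the requested fields.
const Query = {
  drafts(parent, args, ctx, info) {
    return ctx.db.query.posts({ where: { isPublished: false } }, info)
  },
}
```

The binding turns every field of the generated Prisma schema into a method, so the application server never has to hand-write queries against the Prisma endpoint.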

Teach a Man How to Fish

Teach a man how to fish — Photo by Nathaniel Shuman on Unsplash

Let us discuss the generated prisma.graphql file (which is also known as the Prisma database schema) in more detail in this section.

This file contains the full API that Prisma offers you against the current state of your datamodel.graphql file.

Given this information and the self-documenting nature of a GraphQL schema, you can start to explore documented and undocumented features here. I am a strong believer in “code as documentation”, and this is what I used when Prisma was in beta and the documentation was not up to date (it is in excellent shape now).

For example, for our deployed service so far, you can go to prisma.graphql and search for type Query, type Mutation, and type Subscription to explore the full exposed potential of this API (note that this information is also available in the GraphQL Playground; remember the ecosystem, soldier) and drill down on the various input types from there. Doing so will give you a solid understanding of the API and its undocumented features.
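For instance, a query sketched against the generated API might look like this (field names such as isPublished, title, and author come from the boilerplate’s datamodel; yours may differ):

```graphql
query {
  posts(where: { isPublished: true }, orderBy: title_ASC, first: 5) {
    id
    title
    author {
      name
    }
  }
}
```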

Access the Database

Where is my data?

In this service, it is in a MySQL instance hosted using Docker, but it can be in your own database as well.

To access the database, run the following command:

docker exec -it prisma-db mysql -u root --host 127.0.0.1 --port 3306 --password=graphcool

Notice that the password can be either graphcool or prisma by default.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| deep-dive@dev      |
| graphcool          |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
8 rows in set (0.00 sec)

Notice that our database name is in the shape of <service>@<stage>.

mysql> show tables;
+-------------------------+
| Tables_in_deep-dive@dev |
+-------------------------+
| Post                    |
| User                    |
| _PostToUser             |
| _RelayId                |
+-------------------------+
4 rows in set (0.00 sec)

mysql>

And we have tables matching types in our datamodel.graphql file. Prisma did all of this for us. Nice.

Permissions

How do I authenticate requests in this system?

We have two services, the application server and the Prisma service and both will need authentication (and possibly authorization of sorts).

Let’s talk about the Prisma service first. You might have noticed a secret field in the prisma.yml file, and the same secret again in src/index.js on the application server side of things. As documented here (https://www.prisma.io/docs/reference/prisma-api/concepts-utee3eiquo#service-token), that secret is used to sign a JWT that we have to pass to authenticate requests.

This is why you need to mention the secret in the application server for it to be able to talk to the Prisma service, and why, in the Playground of the Prisma service, you manually need to send the HTTP Authorization header as documented (https://www.prisma.io/docs/reference/prisma-api/concepts-utee3eiquo#service-token).

You might also be interested in the prisma token command. Find out more about it in docs or by typing prisma help token.

Next is the application server part. It is unauthenticated by default, and you need to add authentication yourself, like you would for any Node service.

Since we have installed the node-advanced boilerplate, we already have the raw material to do this.

Check out the src/utils.js file; it shows how you can get the user id from an incoming JWT in the HTTP Authorization header.

Then you can check how the exposed getUserId function is used in the query resolvers file at src/resolvers/Query.js to get the user id and fetch drafts for the “logged in” user.
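The idea behind that helper can be sketched like this (hypothetical function name; the real src/utils.js also verifies the token’s signature against the app secret with the jsonwebtoken library, which this sketch skips):

```javascript
// Extract the user id from an HTTP Authorization header of the form
// "Bearer <jwt>". A JWT's payload is its base64url-encoded middle segment.
function userIdFromAuthHeader(header) {
  if (!header) throw new Error('Not authorized')
  const token = header.replace('Bearer ', '')
  const payload = JSON.parse(
    Buffer.from(token.split('.')[1], 'base64').toString()
  )
  return payload.userId
}
```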

Advanced Queries

The documentation (https://www.prisma.io/docs/reference/prisma-api/queries-ahwee4zaey) covers the queries and limitations very well, but let us use our “teach a man how to fish” method to explore queries. Dive into the generated prisma.graphql file and search for type Query. You will land on the following:

type Query {
  posts(
    where: PostWhereInput
    orderBy: PostOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [Post]!

  users(
    where: UserWhereInput
    orderBy: UserOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [User]!

  post(where: PostWhereUniqueInput!): Post

  user(where: UserWhereUniqueInput!): User

  postsConnection(
    where: PostWhereInput
    orderBy: PostOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): PostConnection!

  usersConnection(
    where: UserWhereInput
    orderBy: UserOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): UserConnection!

  node(id: ID!): Node
}

And you can see the whole Prisma API (with respect to your currently deployed datamodel.graphql file) in front of you.

Let us explore the posts field (and related input type PostWhereInput) further:

type Query {
  posts(
    where: PostWhereInput
    orderBy: PostOrderByInput
    skip: Int
    after: String
    before: String
    first: Int
    last: Int
  ): [Post]!
  # ... Other fields ...
}

input PostWhereInput {
  AND: [PostWhereInput!]
  OR: [PostWhereInput!]
  id: ID
  # ... Other fields ...
}

Just by looking at this snippet, you can see that you can perform AND and OR operations recursively in the where argument. The where argument also supports filtering by scalar and nested fields.
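For example, a where filter might combine conditions like this (a sketch; title_contains and title_starts_with follow Prisma’s generated <field>_<operator> filter convention, and the field names assume the boilerplate’s datamodel):

```graphql
query {
  posts(
    where: {
      OR: [
        { title_contains: "GraphQL" }
        { AND: [{ isPublished: true }, { title_starts_with: "Prisma" }] }
      ]
    }
  ) {
    id
    title
  }
}
```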

Similarly, you can explore the input type for orderBy i.e. by searching input PostOrderByInput in the generated prisma.graphql file.

Pagination support is provided by Prisma via a Relay-style connection object. Take a look at the documentation or just search for “connection” in the generated prisma.graphql file.
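A connection query can be sketched as follows (pageInfo, edges, and aggregate are part of the generated connection type; pass the returned endCursor back via the after argument to fetch the next page):

```graphql
query {
  postsConnection(first: 2) {
    pageInfo {
      hasNextPage
      endCursor
    }
    edges {
      node {
        id
        title
      }
    }
    aggregate {
      count
    }
  }
}
```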

The limitations are clearly documented in the docs; most notably:

  1. orderBy is not available for multiple fields or for related fields.
  2. In the where clause, scalar list filters or JSON filters are not available.
  3. A maximum of 1000 nodes can be returned per pagination field on the public cluster. This limit can be increased on other clusters using the cluster configuration.

The best part is that you can join the discussion or even write code to make it happen.

Advanced Mutations

Again, the documentation covers this really well, but let us stick to our “teach a man how to fish” method to explore mutations. Dive into the generated prisma.graphql file and search for type Mutation. You will land on the following:

type Mutation {
  createPost(data: PostCreateInput!): Post!

  createUser(data: UserCreateInput!): User!

  updatePost(data: PostUpdateInput!, where: PostWhereUniqueInput!): Post

  updateUser(data: UserUpdateInput!, where: UserWhereUniqueInput!): User

  deletePost(where: PostWhereUniqueInput!): Post

  deleteUser(where: UserWhereUniqueInput!): User

  upsertPost(
    where: PostWhereUniqueInput!
    create: PostCreateInput!
    update: PostUpdateInput!
  ): Post!

  upsertUser(
    where: UserWhereUniqueInput!
    create: UserCreateInput!
    update: UserUpdateInput!
  ): User!

  updateManyPosts(data: PostUpdateInput!, where: PostWhereInput!): BatchPayload!

  updateManyUsers(data: UserUpdateInput!, where: UserWhereInput!): BatchPayload!

  deleteManyPosts(where: PostWhereInput!): BatchPayload!

  deleteManyUsers(where: UserWhereInput!): BatchPayload!
}

Let us explore mutations for the type User. We can notice that we have the following methods for mutating the type User:

type Mutation {
  createUser(data: UserCreateInput!): User!

  updateUser(data: UserUpdateInput!, where: UserWhereUniqueInput!): User

  deleteUser(where: UserWhereUniqueInput!): User

  upsertUser(
    where: UserWhereUniqueInput!
    create: UserCreateInput!
    update: UserUpdateInput!
  ): User!

  updateManyUsers(data: UserUpdateInput!, where: UserWhereInput!): BatchPayload!

  deleteManyUsers(where: UserWhereInput!): BatchPayload!

  # ... Other fields ...
}

We can see that we have the ability to createUser, updateUser, deleteUser, upsertUser, updateManyUsers, and deleteManyUsers.

Not only that, we can drill down and explore the input UserCreateInput type.

input UserCreateInput {
  email: String!
  password: String!
  name: String!
  posts: PostCreateManyWithoutAuthorInput
}

We can see that we have another input type for creating/connecting posts for a user. Let us explore that by searching for input PostCreateManyWithoutAuthorInput.

input PostCreateManyWithoutAuthorInput {
  create: [PostCreateWithoutAuthorInput!]
  connect: [PostWhereUniqueInput!]
}

We have two fields available here: create, to create new posts inline, and connect, to connect existing posts via a unique field.
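Putting it together, a nested create mutation might look like this (a sketch; the email, name, and post fields assume the boilerplate’s datamodel, and the connect id is a placeholder):

```graphql
mutation {
  createUser(
    data: {
      email: "alice@example.com"
      password: "secret42"
      name: "Alice"
      posts: {
        create: [{ title: "Hello Prisma", text: "Nested create!", isPublished: false }]
        connect: [{ id: "cjexistingpostid" }]
      }
    }
  ) {
    id
    posts {
      id
      title
    }
  }
}
```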

Similarly, we have input PostUpdateManyWithoutAuthorInput in input UserUpdateInput which looks like this:

input PostUpdateManyWithoutAuthorInput {
  create: [PostCreateWithoutAuthorInput!]
  connect: [PostWhereUniqueInput!]
  disconnect: [PostWhereUniqueInput!]
  delete: [PostWhereUniqueInput!]
  update: [PostUpdateWithoutAuthorInput!]
  upsert: [PostUpsertWithoutAuthorInput!]
}

Here we have more options than in input UserCreateInput, and it makes sense intuitively. While creating a user, you can only create and connect posts, but while updating a user, you can perform create, connect, disconnect, delete, update, and upsert on a post.
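As a sketch, a single updateUser mutation can then mix these operations (field names assume the boilerplate’s datamodel; the post id is a placeholder):

```graphql
mutation {
  updateUser(
    where: { email: "alice@example.com" }
    data: {
      posts: {
        create: [{ title: "New draft", text: "Work in progress", isPublished: false }]
        disconnect: [{ id: "cjoldpostid" }]
      }
    }
  ) {
    id
  }
}
```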

A good thought experiment: why can’t we do an upsert while creating a post? Ping me or join the community if you want to explore this further.

Plugs

And that is all for this post; it was a great experience working on it. If you like my work and would like to subscribe to interesting future posts, please subscribe below.