
Updating JSON Data in PostgreSQL

If you’re storing JSON data in Postgres, you’ll eventually need to update it. In this post, we’ll talk about a couple of functions that can be used to update JSON data.

Aaron Bos | Saturday, March 20, 2021

Note: If you’d like to see the updated syntax for JSON support in PostgreSQL 14, check out the post here!

The content in this post is directed at the functionality of PostgreSQL 13. If you are on a different version and something mentioned doesn’t work as expected, check the docs to verify that what is mentioned in this post exists in the version you’re on.

Functions to Update JSON

In this post, we will be focusing on what we can do to update values in an existing JSON document. Check out my previous blog post on querying JSON data in Postgres if you’re unfamiliar with the ->> operator syntax in the statements. If you’re used to updating data in a relational database you may try to do something like this.
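
The statement the post refers to isn’t shown here; a sketch of the kind of naive attempt that fails (the table name person is an assumption):

```sql
-- This does NOT work: a jsonb key can't be targeted in SET as if it were a column.
UPDATE person
SET personb ->> 'first_name' = 'Dr. Clarence'
WHERE personb ->> 'last_name' = 'Ellis';
```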

I thought I could do something along those lines before realizing it doesn’t work 😞, but as anyone should do when they hit an unexpected roadblock, I decided to consult the Postgres documentation. After thinking about it for a bit, it makes sense why this method of updating JSON doesn’t work. Since the data we are working with is jsonb, essentially a blob of JSON data in binary format, we can’t just pick a value out of it and update it in place. Luckily Postgres provides a couple of functions to accomplish the task for us.

The first function that we’re going to look at is jsonb_set, which has the following signature.
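
From the PostgreSQL 13 documentation, the signature is:

```sql
jsonb_set(target jsonb, path text[], new_value jsonb [, create_if_missing boolean])
```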

Let’s break down the function’s parameters to get a better understanding of how the function works.

  1. target — This is the jsonb value that will be updated and returned from the function.
  2. path — This is how we indicate which JSON key needs to be updated, via a text[]. We can think of it as an absolute path: if we have a nested object, we’ll need to specify all parent keys.
  3. new_value — This is the jsonb value that will be written at the location indicated by the path argument.
  4. create_if_missing — This is an optional boolean parameter that indicates whether the key/value should be created if the key indicated by the path does not exist.

Now that we’ve covered the parameters provided by jsonb_set, let’s look at some examples. For our examples, we’ll be working with this JSON document. The table that we’ll be working with has an id column as well as a column called personb, which is the column that stores our JSON below.
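
The sample document isn’t reproduced here; a plausible reconstruction consistent with the examples that follow (the table name person and the interest values are assumptions):

```sql
CREATE TABLE person (
  id serial PRIMARY KEY,
  personb jsonb
);

INSERT INTO person (personb) VALUES ('{
  "first_name": "Clarence",
  "last_name": "Ellis",
  "interests": ["Computing", "Research"]
}');
```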

For this first example let’s say that we want to update the "first_name" key of this object to the value of "Dr. Clarence". Our update statement might look something like this.
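
A hedged reconstruction of that statement (the table name person is an assumption):

```sql
UPDATE person
SET personb = jsonb_set(personb, '{first_name}', '"Dr. Clarence"', false)
WHERE personb ->> 'last_name' = 'Ellis';
```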

With this statement we are using the jsonb_set function to update the personb column where the last_name key is "Ellis". This is a contrived example that assumes there is only one document in our dataset that has last_name=’Ellis’. Since the "first_name" key that we wanted to update is at the first level of the JSON document, we only have to define ’{first_name}’ for the path.

Let’s look at another example where we want to update the interests array, creating a value when it does not exist. This will require specifying true for the optional create_if_missing parameter (we explicitly passed false in our previous example). For this example, we want to add the value "Teaching" to the interests key.
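
A hedged reconstruction (the table name person is an assumption); index 2 is one past the end of the two-item array:

```sql
UPDATE person
SET personb = jsonb_set(personb, '{interests,2}', '"Teaching"', true)
WHERE personb ->> 'last_name' = 'Ellis';
```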

You’ll notice that the path is a little different this time because we first needed to specify that we’ll be updating the interests key, which is an array with only two items in it (before updating). The path ’{interests,2}’ says that we want to update a JSON value located at the top-level interests key, and we expect it to be an array by specifying 2 for the index. Since we specified true for the create_if_missing parameter, the function will create the item at index 2 of the interests array.

Let’s look at one more function available to us that’s used to update JSON. It is very similar to jsonb_set but there is a subtle difference that opens up a few more options to our update capabilities. The function I’m referring to is jsonb_set_lax and its signature looks like this.
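
Again from the PostgreSQL 13 documentation, the signature is:

```sql
jsonb_set_lax(target jsonb, path text[], new_value jsonb
              [, create_if_missing boolean [, null_value_treatment text]])
```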

I will skip the breakdown of the first four parameters because they are the same as jsonb_set. I will focus on explaining the optional null_value_treatment parameter. This parameter accepts text and determines the function’s behavior when the new_value argument passed to the function is NULL. The list of predefined values that we can use for this parameter is as follows.

  1. ‘raise_exception’ — If new_value is NULL, an exception will be thrown.
  2. ‘use_json_null’ — If new_value is NULL, the JSON null value will be used.
  3. ‘delete_key’ — If new_value is NULL, the key and value specified by the path argument will be deleted.
  4. ‘return_target’ — If new_value is NULL, the function will return the target argument unchanged.

Let’s take a look at a couple of examples using jsonb_set_lax . In the first example we will attempt to set a JSON value to NULL when the null_value_treatment argument is ‘raise_exception’ . Here’s the SQL.
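
A hedged reconstruction (the table name and path are assumptions):

```sql
UPDATE person
SET personb = jsonb_set_lax(personb, '{first_name}', NULL, true, 'raise_exception')
WHERE personb ->> 'last_name' = 'Ellis';
```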

When we run this SQL statement we see the following exception message as expected.
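
The message isn’t reproduced here; in PostgreSQL 13 it reads approximately:

```
ERROR:  JSON value must not be null
DETAIL:  Exception was raised because null_value_treatment is "raise_exception".
```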

I think that the remaining options for the null treatment parameter are relatively self-explanatory, so the final example will be what happens when we pass a value that isn’t one of the four predefined options. So this time we will attempt to run this.
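
A hedged reconstruction, passing a made-up treatment value:

```sql
UPDATE person
SET personb = jsonb_set_lax(personb, '{first_name}', NULL, true, 'not_a_real_option')
WHERE personb ->> 'last_name' = 'Ellis';
```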

In this scenario, you might expect the function to use the default option which is use_json_null . Fortunately, the function protects us from any unexpected behavior by raising this exception.
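
The exception isn’t reproduced here; it reads approximately:

```
ERROR:  null_value_treatment must be "delete_key", "return_target", "use_json_null", or "raise_exception"
```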

In this post, we covered a couple of different ways that we can utilize jsonb_set and jsonb_set_lax to update JSON data in Postgres. If you are storing JSON in Postgres, there will most likely come a time that you need to update specific keys or values and these are ways that I found useful when trying to solve that problem. Definitely check out the Postgres docs to learn more!


How to update objects inside JSONB arrays with PostgreSQL

by Leandro Cesquini Pereira


How to update a specific value on a JSONB array

Let’s say you decided to store data in the database as json or jsonb and discovered that you just created new problems for yourself that you didn’t have before. You’re not alone.

JSONB is a powerful tool, but it comes at some cost because you need to adapt the way you query and handle the data.

And it’s not rare to load the entire jsonb object into memory, transform it using your preferred programming language, and then save it back to the database. But then you’ve just created another problem: performance bottlenecks and resource waste.

In this article let’s see how to update a specific value of an object inside an array with one query.

TL;DR: the final query is at the end of the article, and you can check out a live example at DB Fiddle to copy & paste and play with.

Suppose you’re implementing a customer screen to store dynamic contacts for each customer. Then you come up with the idea of storing the contacts as a JSONB column because they’re dynamic, and thus using a non-relational data structure makes sense.

Then you create a customers table with a JSONB contacts column and insert some data into it:
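
The original DDL and data aren’t shown here; a plausible reconstruction consistent with the examples below (the emails and phone numbers are invented):

```sql
CREATE TABLE customers (
  id serial PRIMARY KEY,
  name text,
  contacts jsonb
);

INSERT INTO customers (name, contacts) VALUES
  ('Jimi', '[{"type": "phone", "value": "+1 555 0100"},
             {"type": "email", "value": "jimi@example.com"}]'),
  ('Janis', '[{"type": "email", "value": "janis@example.com"},
              {"type": "phone", "value": "+1 555 0101"}]');
```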

Pretty easy, right? But how can you update a specific contact for a specific customer? How do you change Jimi’s email or Janis’ phone?

Fortunately, PostgreSQL is your friend and provides the jsonb_set function:

jsonb_set(target jsonb, path text[], new_value jsonb[, create_missing boolean])

Given a jsonb column, you can set a new value on the specified path:
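
A hedged sketch of such a select (the new email value is invented):

```sql
-- Replace the "value" key of the second contact (index 1) in Jimi's list.
SELECT jsonb_set(contacts, '{1, value}', '"jimi@example.org"')
FROM customers
WHERE name = 'Jimi';
```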

The above selects will return:

To change Jimi’s email on the contacts list, you pass the path ’{1, value}’, which means the second object in the array (arrays start at 0) and the key value. That’s the path. The same applies to changing Janis’ email, but her email object is at index 0.


You may be thinking: I just have to use jsonb_set on an update statement and it’s all done? That’s the idea, but that’s not enough yet.

The problem with non-relational data is that it’s dynamic. Well, that’s one of the reasons for using JSONB, but it brings a problem: Jimi’s email object is at index 1 and Janis’ email object is at index 0 of the array, and another customer could have a very different array with different indexes. So, how can you discover the index of each contact type?

The answer is ordering the elements of the array and getting its index:
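
A hedged reconstruction of that query:

```sql
-- WITH ORDINALITY numbers the elements starting at 1,
-- so subtract 1 to get the zero-based array index.
SELECT pos - 1 AS email_index
FROM customers,
     jsonb_array_elements(contacts) WITH ORDINALITY arr(contact, pos)
WHERE name = 'Jimi'
  AND contact ->> 'type' = 'email';
```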

That query returns 1, which is the index of the email object (type email) inside the contacts array of the customer Jimi.

Now we have all the pieces of the puzzle: we know how to update a jsonb value and how to discover the index of the object to be updated.

The only step left is the update itself. Putting it all together we have:
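
A hedged reconstruction of the combined update (the new email value is invented):

```sql
WITH email_path AS (
  SELECT ('{' || pos - 1 || ', value}')::text[] AS path
  FROM customers,
       jsonb_array_elements(contacts) WITH ORDINALITY arr(contact, pos)
  WHERE name = 'Jimi'
    AND contact ->> 'type' = 'email'
)
UPDATE customers
SET contacts = jsonb_set(contacts, email_path.path, '"jimi@example.org"')
FROM email_path
WHERE name = 'Jimi';
```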

The most important part of this query is the with block. It’s a powerful resource, but for this example, you can think of it as a "way to store a variable": the path of the contact you need to update, which is dynamic depending on the record.

Let me explain a bit about this part:

It just builds the path as ’{1, value}’, but we need to convert it to text[] because that’s the type expected by the jsonb_set function.

Wrapping up

JSONB is a great and valuable tool to resolve a lot of problems. But keep in mind that you also need to query and update this kind of data. That brings a cost that you have to consider when deciding which tools you pick to use.

Side note: that solution came out of a pair programming session with Lucas Cegatti.

Are you looking for a creative company to implement your next idea? Check out LNA Systems and let’s talk.


Fast & reliable JSONB deep structure update in PostgreSQL

Assume you have a PostgreSQL database with a survey table. A survey has a questions column (JSONB), and each question has choices.

Sample content of the questions column for the "web dev" survey:
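
The sample isn’t reproduced here; a plausible reconstruction based on the query output later in the post (the first question and its IDs are invented):

```json
[
  {"id": "9f8f6c29", "text": "pick a backend language",
   "choices": [{"id": "c0e7a1b2", "text": "Go"}, {"id": "d1f8b2c3", "text": "Ruby"}]},
  {"id": "73294354", "text": "pick a frontend framework",
   "choices": [{"id": "3feba90d", "text": "React"},
               {"id": "d6aa4dbd", "text": "Vue"},
               {"id": "9beb5da3", "text": "Angular"}]}
]
```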


We want to replace the choice with ID 9beb5da3 (Angular) in question 73294354 ("pick a frontend framework") for survey 8bf9fedb (the ID of the web dev survey).


To update a subpart of a JSONB column, we could use jsonb_set . Given a path with indices and keys, we can replace the target object with a new object:
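
A hedged sketch, assuming we already knew the positions (question at index 1, choice at index 2); the replacement object is invented:

```sql
UPDATE survey
SET questions = jsonb_set(
  questions,
  '{1, choices, 2}',
  '{"id": "9beb5da3", "text": "Angular 12"}'  -- replacement object assumed
)
WHERE id = '8bf9fedb';
```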


Still, the problem is that we don’t know the actual positions of the items in the arrays. We have to find them first. The only thing we know is the IDs of the objects.

🤔 «I know I want to update choice 9beb5da3 in question 73294354; I just don’t know what their positions are in their respective arrays.»

One could run a first query to get the whole column data, then iterate over the objects in their favorite programming language to find the item positions, then use those positions to configure the jsonb_set path parameter and run the update query.

This approach has two caveats:

  • It’s slow: fetch all the record data from the DB, find the positions programmatically, then update the data in the database.
  • It’s not reliable: the positions of the data could change during this process, between the two queries.

We can do this with only one operation: one update main query that contains two sub-queries to build indices.


First, we need to get the index of the question 73294354 :
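
A hedged reconstruction:

```sql
SELECT question, questionindex
FROM survey,
     jsonb_array_elements(questions) WITH ORDINALITY t(question, questionindex)
WHERE id = '8bf9fedb'
  AND question ->> 'id' = '73294354';
```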


The jsonb_array_elements function expands the array into a set of JSON values, and with ordinality adds a number (starting from 1, increasing by 1) for each row of the function output. This query returns:

question questionindex
{"id": "73294354", "text": "pick a frontend framework", "choices": [{"id": "3feba90d", "text": "React"}, {"id": "d6aa4dbd", "text": "Vue"}, {"id": "9beb5da3", "text": "Angular"}]} 2

Then, we need to get the index of the choice 9beb5da3 (which is in question 73294354: we have to be sure we search only in this question, in case some choices share the same index in different questions).

To do so, we have to use the first query as a subquery, then get the index:
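
A hedged reconstruction, nesting the previous query:

```sql
SELECT choice, choiceindex
FROM (
  SELECT question, questionindex
  FROM survey,
       jsonb_array_elements(questions) WITH ORDINALITY t(question, questionindex)
  WHERE id = '8bf9fedb'
    AND question ->> 'id' = '73294354'
) q,
jsonb_array_elements(q.question -> 'choices') WITH ORDINALITY c(choice, choiceindex)
WHERE choice ->> 'id' = '9beb5da3';
```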


We just used jsonb_array_elements and with ordinality again, nothing new here. As a result, we have the choice we want to change and its position.

Now, we are ready to update the record, since we have the index to build the path. Building the path is simply creating a string thanks to the index we have from previous steps:
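
A hedged sketch of the path construction, with the ordinality values found above (2 for the question, 3 for the choice):

```sql
SELECT ('{' || 2 - 1 || ', choices, ' || 3 - 1 || '}')::text[] AS path;
```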


☝️ Since ordinality starts at 1, we subtract 1 from the indices.

We can modify the second query to directly return the path we want:
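
A hedged reconstruction returning the path directly:

```sql
SELECT ('{' || questionindex - 1 || ', choices, ' || choiceindex - 1 || '}')::text[] AS path
FROM (
  SELECT question, questionindex
  FROM survey,
       jsonb_array_elements(questions) WITH ORDINALITY t(question, questionindex)
  WHERE id = '8bf9fedb'
    AND question ->> 'id' = '73294354'
) q,
jsonb_array_elements(q.question -> 'choices') WITH ORDINALITY c(choice, choiceindex)
WHERE choice ->> 'id' = '9beb5da3';
```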


For the last step, we need to create the update query that uses the two previous queries to actually update the choice:
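
A hedged reconstruction of the full update (the replacement object is invented):

```sql
WITH choice_path AS (
  SELECT ('{' || questionindex - 1 || ', choices, ' || choiceindex - 1 || '}')::text[] AS path
  FROM (
    SELECT question, questionindex
    FROM survey,
         jsonb_array_elements(questions) WITH ORDINALITY t(question, questionindex)
    WHERE id = '8bf9fedb'
      AND question ->> 'id' = '73294354'
  ) q,
  jsonb_array_elements(q.question -> 'choices') WITH ORDINALITY c(choice, choiceindex)
  WHERE choice ->> 'id' = '9beb5da3'
)
UPDATE survey
SET questions = jsonb_set(questions, choice_path.path, '{"id": "9beb5da3", "text": "Angular 12"}')
FROM choice_path
WHERE id = '8bf9fedb';
```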



It’s fast, since we are only reading and writing one row in survey . It’s reliable because we are doing only one query.

Feel free to post a comment if you need more information.



JSON in PostgreSQL: The Ultimate Guide

PostgreSQL has quite a lot of features for storing and working with JSON data.

In this guide, you’ll learn:

  • What JSON is and why you might want to use it
  • Creating a table to store JSON data
  • How to add, read, update, and delete JSON data
  • Tips for performance, validating and working with JSON

Let’s get into the guide.


What is JSON and Why Should I Use It?

JSON stands for JavaScript Object Notation, and it’s a way to format and store data.

Data can be represented in a JSON format in PostgreSQL so it can be read and understood by other applications or parts of an application.

It’s similar to HTML or XML – it represents your data in a certain format that is readable by people but designed to be readable by applications.

Why Use JSON In Your Database?

So why would you use JSON data in your database?

If you need a structure that’s flexible.

A normalised database structure, one with tables and columns and relationships, works well for most cases. Recent improvements in development practices also mean that altering a table is not as major as it was in the past, so adjusting your database once it’s in production is possible.

However, if your requirements mean that your data structure needs to be flexible, then a JSON field may be good for your database.

One example may be where a user can add custom attributes. If it was done using a normalised database, this may involve altering tables, or creating an Entity Attribute Value design, or some other method.

If a JSON field was used for this, it would be much easier to add and maintain these custom attributes.

The JSON data can also be stored in your database and processed by an ORM (Object Relational Mapper) or your application code, so your database may not need to do any extra work.

What Does JSON Data Look Like?

Here’s a simple example of JSON data:
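
The snippet isn’t shown here; a minimal example consistent with the name/value pairs discussed below:

```json
{
  "username": "jsmith"
}
```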

It uses a combination of different brackets, colons, and quotes to represent your data.

Let’s take a look at some more examples.

JSON data is written as name/value pairs. A name/value pair consists of a name in double quotes, followed by a colon, followed by a value.

This is an example of a name/value pair:
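
```json
"username": "jsmith"
```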

The name is “username” and the value is “jsmith”. They are separated by a colon “:”.

This means for the attribute of username, the value is jsmith. Names in JSON need to be enclosed in double quotes.

JSON data can be enclosed in curly brackets which indicate it’s an object.
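
The same pair, enclosed in curly brackets to form an object:

```json
{
  "username": "jsmith"
}
```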

This is the same data as the earlier example, but it’s now an object. This means it can be treated as a single unit by other areas of the application.

How does this help? It’s good for when there are multiple attributes:
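
A sketch with extra attributes (the additional names are assumptions):

```json
{
  "username": "jsmith",
  "first_name": "John",
  "last_name": "Smith"
}
```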

Additional attributes, or name/value pairs, can be added by using a comma to separate them.

You’ll also notice in this example the curly brackets are on their own lines and the data is indented. This is optional: it’s just done to make it more readable.

JSON also supports arrays, which is a collection of records within an object. Arrays in JSON are included in square brackets and have a name:
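
A sketch consistent with the description below (the second post’s values are assumptions):

```json
{
  "username": "jsmith",
  "posts": [
    {"id": 1, "title": "Welcome"},
    {"id": 2, "title": "My Second Post"}
  ]
}
```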

In this example, this object has an attribute called “posts”. The value of posts is an array, which we can see by the opening square bracket “[“.

Inside the square bracket, we have a set of curly brackets, indicating an object, and inside those we have an id of 1 and a title of Welcome. We have another set of curly brackets indicating another object.

These two objects are posts and they are contained in an array.

And that covers the basics of what JSON is.

If JSON is new to you, don’t worry, it gets easier as you work with it more.

If you’re experienced with JSON, you’ll find the rest of the guide more useful as we go into the details of working with JSON in PostgreSQL.

How to Create and Populate a JSON Field in PostgreSQL

So you’ve learned a bit about JSON data and why you might want to use it.

How do we create a field in PostgreSQL?

Two JSON Data Types

There are two data types in PostgreSQL for JSON: JSON and JSONB.

JSON is the “regular” JSON data type and was added in PostgreSQL 9.2. JSONB stands for JSON Binary and was added in PostgreSQL 9.4.

What’s the difference?

There are a few differences, but it’s mainly to do with how the data is stored. JSONB data is stored in a binary format and is easier to process.

PostgreSQL recommends using the JSONB data type in most situations.

Here’s a table describing the differences.

Feature JSON JSONB
Storage Stored exactly as entered Stored in a decomposed binary format
Supports full text indexing No Yes
Preserves white space Yes No, it is removed
Preserves order of keys Yes No
Keeps duplicate keys Yes No (last value is kept)

Because of the way the data is stored, JSONB is slightly slower to input (due to the conversion step) but a lot faster to process.

In this guide, we’ll use the JSONB data type. Most of the features will work the same for the JSON data type, but we’ll focus on JSONB.

Creating a JSON Field

We create a new JSONB field to store our JSON data.

Here’s an example using a product table.
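
The DDL isn’t shown here; a plausible reconstruction (the column types are assumptions):

```sql
CREATE TABLE product (
  id serial PRIMARY KEY,
  product_name varchar(200),
  attributes jsonb
);
```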

We have created a table called product. It has an id and a product name. There’s also an attributes column, which has the data type of JSONB.

Because it’s a JSONB data type, the values inserted into it must be valid JSON. Other databases require a check constraint to be added, but this is not required in PostgreSQL.

Adding a JSON column is as easy as that.

Adding Data to a JSON Field

Now we’ve got our field for storing JSON data, how do we add data to it?

We simply insert a record into our table as though it’s a text value. The value needs to be valid JSON, otherwise, we’ll get an error.

We can add our first product like this:
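
A hedged reconstruction, matching the Chair attributes used throughout the guide:

```sql
INSERT INTO product (product_name, attributes)
VALUES ('Chair', '{"color": "brown", "material": "wood", "height": "60cm"}');
```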

We can run this statement and the record is inserted.

If we try to insert an invalid JSON field, we’ll get an error. Here’s the INSERT statement:
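
A hedged example of a statement with invalid JSON (note the missing closing brace; the product name is invented):

```sql
INSERT INTO product (product_name, attributes)
VALUES ('Bookshelf', '{"color": "brown"');
```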

This is what we’ll see:
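
PostgreSQL rejects the row with an error along these lines (the DETAIL varies with the input):

```
ERROR:  invalid input syntax for type json
```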

Using the method above, we needed to enter the data in exactly the right format.

Inserting Arrays

If you want to insert JSON data that contains arrays, you can enter it using text in a JSON format.

Here’s how to insert an array by just specifying it in a JSON format.
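
A hedged reconstruction, matching the Desk row shown in later output:

```sql
INSERT INTO product (product_name, attributes)
VALUES ('Desk', '{"color": "black", "material": "metal",
                  "drawers": [{"side": "left", "height": "30cm"},
                              {"side": "left", "height": "40cm"}]}');
```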

This will insert a new product that has an array of drawers. As you can probably see by this statement, reading it (and writing it) is a bit tricky.

You can insert simpler arrays using this method too.
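
For example, an array of plain strings (matching the Side Table row in later output):

```sql
INSERT INTO product (product_name, attributes)
VALUES ('Side Table', '{"color": "brown", "material": ["metal", "wood"]}');
```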

The INSERT statements will work.

Using JSON Functions to Insert Data

We just saw how to insert data into a JSONB field by providing text values.

They can be error-prone and hard to type.

Fortunately, PostgreSQL offers a few functions to help us insert data into a JSON field.

The JSONB_BUILD_OBJECT function will take a series of values and create a JSON object in binary format, ready to be inserted into a JSONB field.

We can use the JSONB_BUILD_OBJECT to construct a new record:

This will create a value that looks like this:
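
A hedged reconstruction of both steps, matching the Small Table row in later output:

```sql
SELECT jsonb_build_object('color', 'black', 'material', 'plastic');
-- {"color": "black", "material": "plastic"}

INSERT INTO product (product_name, attributes)
VALUES ('Small Table', jsonb_build_object('color', 'black', 'material', 'plastic'));
```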

We can use this in the INSERT statement. We don’t need to worry about putting brackets, commas, colons, and quotes in the right place. As long as the string values are quoted, it should work.

The new record will be added to the table.

There are several other functions available to help you insert JSON values into a table:

  • to_json and to_jsonb
  • array_to_json
  • row_to_json
  • json_build_array and jsonb_build_array
  • json_object and jsonb_object

How to Read and Filter JSON Data in PostgreSQL

Once you’ve got some JSON data in a table, the next step is to read it.

How do we do that?

Selecting a JSON field is pretty simple. We can just select it as though it is a column.
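
For example:

```sql
SELECT id, product_name, attributes
FROM product;
```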

id product_name attributes
1 Chair {"color": "brown", "height": "60cm", "material": "wood"}
2 Desk {"color": "black", "drawers": [{"side": "left", "height": "30cm"}, {"side": "left", "height": "40cm"}], "material": "metal"}
3 Side Table {"color": "brown", "material": ["metal", "wood"]}
4 Small Table {"color": "black", "material": "plastic"}

This shows us the data in the JSON column, and it looks just like a text value.

The good thing with this is that any application can easily read this field and work with it how they want (display it, filter it, and so on).

What if we wanted to do more in our database?

Selecting Individual Attributes

The JSON data is stored in something that looks like a text field. However, it’s quite easy to get attributes and values out of this text field and display them.

We can extract a value from the JSON field and display it in a separate column. We can do this using a few different symbols or notations.

To view a particular key’s value, we can specify the key as a “child” of the JSONB column.

We do this using the -> notation:
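
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes -> 'color' AS color_key
FROM product;
```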

We are selecting the color element from the attributes column and giving it an alias of color_key.

Here are the results:

id product_name color_key
1 Chair "brown"
2 Desk "black"
3 Side Table "brown"
4 Small Table "black"

Notice that each value has a quote around it.

Alternatively, we can select only the values, which displays the value without quotes. The symbol for this is ->>.

Here’s our updated query. We’ll select both methods so you can see the difference:
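
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes -> 'color' AS color_key,
       attributes ->> 'color' AS color_value
FROM product;
```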

Here are the results.

id product_name color_key color_value
1 Chair "brown" brown
2 Desk "black" black
3 Side Table "brown" brown
4 Small Table "black" black

What if the key does not exist?

Not every record has a height key. Let’s see what happens if we select this.
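
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes ->> 'height' AS height
FROM product;
```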

id product_name height
1 Chair 60cm
2 Desk null
3 Side Table null
4 Small Table null

Null values are shown where the key does not exist.

Selecting Array Values

We can use a similar concept to select a value that is stored as an array. In our example, the “drawers” attribute is an array.

Our query would look like this:
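
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes -> 'drawers' AS drawers
FROM product;
```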

The results are:

id product_name drawers
1 Chair null
2 Desk [{"side": "left", "height": "30cm"}, {"side": "left", "height": "40cm"}]
3 Side Table null
4 Small Table null

The full value of the drawers attribute is shown. It’s an array with two objects inside, and each object has a side key and a height key.

What if we want to see an attribute that’s inside another attribute? For example, the first of the “drawer” attributes?

We can do this by getting the element of drawers, then selecting the element at a specified number.

This example will get the attributes column, then the drawers element, then the element at position 1 (which is the 2nd element as arrays start at 0).

Notice how the -> symbol is used, as this symbol retrieves the element and not just the text.

Here’s a sample query.
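
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes -> 'drawers' -> 1 AS drawers
FROM product;
```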

The results are:

id product_name drawers
1 Chair null
2 Desk {"side": "left", "height": "40cm"}
3 Side Table null
4 Small Table null

We can use a different notation to get an array as either an object or a text. The notation #> will retrieve it as an element and #>> will retrieve it as text.

Let’s see an example.
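
A hedged reconstruction:

```sql
SELECT id, product_name,
       attributes #> '{drawers, 1}' AS drawers_element,
       attributes #>> '{drawers, 1}' AS drawers_text
FROM product;
```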

After the notation, we specify curly brackets, and inside we specify the key name and the element number. In this example, we are looking for the drawers key and element 1, which is the second element.

Here are our results:

id product_name drawers_element drawers_text
1 Chair null null
2 Desk {"side": "left", "height": "40cm"} {"side": "left", "height": "40cm"}
3 Side Table null null
4 Small Table null null

In this example, both columns are the same. But we can see how this notation is used and it may be useful for us in our JSON queries.

Filtering on JSON Data in PostgreSQL

Let’s say we wanted to see our Chair product, which has a brown color, wood material, and a height of 60cm. But we want to filter on the JSON attributes for this example.

Let’s try this query.
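
A hedged reconstruction, comparing against the entire JSON value:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes = '{"color": "brown", "material": "wood", "height": "60cm"}';
```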

We can run this query. Here’s what we see:

id product_name attributes
1 Chair {"color": "brown", "height": "60cm", "material": "wood"}

This works because the text value is stored in the attributes column, and as long as we provide the exact full string, it will work.

Often we need to filter on one of the keys or values. How can we do that?

What if we try using the LIKE keyword with a partial match?
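
A hedged sketch of the attempt:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes LIKE '%brown%';
```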

We get an error in PostgreSQL as we can’t use the LIKE operator on a JSONB column.

There are several features in PostgreSQL that make it possible to filter on JSON data.

Using Notation to Filter Data

Let’s say we want to find all products where the color is brown. The color is a part of the attributes JSON column in our table.

We can use the same notation we used for selecting a column for filtering it.

We can write the notation like this:

We then add this to our SELECT statement
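
A hedged reconstruction of the failing query:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes -> 'color' = 'brown';   -- fails: 'brown' is not valid JSON
```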

We’ve added this notation to our WHERE clause, and added a condition where the color is equal to “brown”.

Here’s what happens when we run the query.

We get this error because the query is expecting a JSON object in the WHERE clause, and we provided the string of “brown”. It’s expecting a JSON object because we used ->.

We can use ->> to be able to provide a string.
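
A hedged reconstruction:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes ->> 'color' = 'brown';
```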

The results of this query are shown below.

id product_name attributes
1 Chair {"color": "brown", "height": "60cm", "material": "wood"}
3 Side Table {"color": "brown", "material": ["metal", "wood"]}

We can see that the results only show records where the color attribute is brown.

Splitting JSON Data into Rows

So far we’ve seen examples of using different functions to read and filter JSON data.

The result has been that all of the JSON data has been displayed in a single column.

PostgreSQL includes some functions that let you expand the JSON fields and display them in different rows.

The JSONB_EACH function will display each key and value pair as elements. It will split each key into separate rows.
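
A hedged reconstruction:

```sql
SELECT id, product_name, jsonb_each(attributes)
FROM product;
```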

Here are the results:

id product_name jsonb_each
1 Chair (color,"""brown""")
1 Chair (height,"""60cm""")
1 Chair (material,"""wood""")
2 Desk (color,"""black""")
2 Desk (drawers,"[{""side"": ""left"", ""height"": ""30cm""}, {""side"": ""left"", ""height"": ""40cm""}]")
2 Desk (material,"""metal""")
3 Side Table (color,"""brown""")
3 Side Table (material,"[""metal"", ""wood""]")
4 Small Table (color,"""black""")
4 Small Table (material,"""plastic""")

I can’t think of a reason that this function would be useful. Perhaps if you want to filter on attributes and display all of the attribute values as well.

You can also use JSONB_OBJECT_KEYS to get a list of all keys in the JSON field:
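
A hedged reconstruction:

```sql
SELECT id, product_name, jsonb_object_keys(attributes)
FROM product;
```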

The results are:

id product_name jsonb_object_keys
1 Chair color
1 Chair height
1 Chair material
2 Desk color
2 Desk drawers
2 Desk material
3 Side Table color
3 Side Table material
4 Small Table color
4 Small Table material

This can be useful to find all rows that have a specific key, or to find the most common keys:
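
A hedged sketch of the key-count query:

```sql
SELECT attr_key, COUNT(*) AS count
FROM product,
     jsonb_object_keys(attributes) AS attr_key
GROUP BY attr_key
ORDER BY count DESC;
```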

attr_key count
color 4
material 4
drawers 1
height 1

Check If a Key Exists

PostgreSQL has other operators that let you find records that contain a certain JSON attribute.

One example is the ? operator. This will let you determine if a JSON value contains a specified key.

Let’s see an example. This query finds records that have the key of “drawers”.
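
A hedged reconstruction:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes ? 'drawers';
```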

The results are:

id product_name attributes
2 Desk {"color": "black", "drawers": [{"side": "left", "height": "30cm"}, {"side": "left", "height": "40cm"}], "material": "metal"}

The row with id 2 is shown as it’s the only one with the attribute of drawers.

How to Update JSON Data in PostgreSQL

Reading JSON is one thing. What if you need to update JSON data?

There are several ways to do this. We’ll look at each of them.

Insert a New Key by Concatenating Values

You can update a JSON field using an UPDATE statement. Using this UPDATE statement, you can add a new key and value to the field by concatenating it to the existing value.

Here’s the syntax:
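
A sketch of the general shape (table and column names are placeholders):

```sql
UPDATE table_name
SET jsonb_column = jsonb_column || '{"new_key": "new value"}'
WHERE conditions;
```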

If we want to add a new key and value pair to one of our products, we can concatenate a JSON value to the existing value and run it in the UPDATE statement.

Here’s our table before the update:

id product_name attributes
1 Chair {"color": "brown", "height": "60cm", "material": "wood"}

Here’s the update statement:
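
A hedged reconstruction (the width value is an assumption):

```sql
UPDATE product
SET attributes = attributes || '{"width": "100cm"}'
WHERE id = 1;
```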

Here’s the table after the update:

id product_name attributes
1 Chair

Notice that the new width key is added to the JSON value. It’s also added in the middle, as the JSONB data type doesn’t preserve the order of the keys. This is OK, our JSON field still works as expected.

Updating an Existing Value Using JSONB_SET

The JSONB_SET function allows you to update an existing key to a new value. This is helpful if you don’t want to read and update the entire field just to change one value.

The syntax looks like this:
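
A sketch of the general shape (table and column names are placeholders):

```sql
UPDATE table_name
SET jsonb_column = jsonb_set(jsonb_column, '{key_name}', '"new value"')
WHERE conditions;
```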

Let’s say we want to update the height for our Chair product from 60cm to 75cm.

Here’s the table before the update:

id product_name attributes
1 Chair

Our Update statement would look like this:
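
A hedged reconstruction:

```sql
UPDATE product
SET attributes = jsonb_set(attributes, '{height}', '"75cm"')
WHERE id = 1;
```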

There are a few things to notice:

  • The first parameter of JSONB_SET is the attributes column.
  • The second parameter is the height key. This needs to be enclosed in curly brackets to be treated as a JSON key.
  • The third parameter is the new value of 75cm. This is enclosed in single quotes as it’s a string, and then double quotes as it’s a JSON value. Without the double quotes, you’ll get an error.

Here’s the table after the update:

id product_name attributes
1 Chair

No new attributes were added. The Update statement simply updated the existing value.

How to Delete from a JSON Field in PostgreSQL

There are two DELETE operations you can do when working with JSON fields:

  • delete an attribute from a JSON field
  • delete a row from your table

Deleting a Row

Deleting a row from your table is done in the same way as in regular SQL. You can write an SQL statement to delete the row that matches your ID, or use the ->> notation in the WHERE clause.

For example, to delete all rows where the color attribute is brown:
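
A hedged reconstruction:

```sql
DELETE FROM product
WHERE attributes ->> 'color' = 'brown';
```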

This will remove the matching records from the table.

Removing an Attribute from a JSON Field

The other way to delete JSON data is to remove an attribute from a JSON field.

This is different from updating, as you’re removing the attribute entirely rather than just updating its value to something else.

We can remove an attribute from a JSON field using the - operator. We use the UPDATE statement and update the JSON field with the - operator and the key we want to remove.

For example, here’s our Chair product.

id product_name attributes
1 Chair

Let’s say we want to remove the “height” attribute. We can do this by writing an update statement and removing it.
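
A hedged reconstruction:

```sql
UPDATE product
SET attributes = attributes - 'height'
WHERE id = 1;
```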

After we run this statement, we can check our table again.

id product_name attributes
1 Chair

The height attribute has been removed.

Improve the Performance of JSON Queries

The JSON support and features in PostgreSQL are pretty good, and each version includes more features.

So, given that you can add JSON columns to tables, extract fields, and get all the flexibility of JSON fields with validation, wouldn’t it be better to just store all of your data in JSON fields rather than normalised tables?

Well, sometimes that might be a good idea. But then you may be better off using a NoSQL database rather than PostgreSQL.

Another reason why using primarily JSON fields to store your data is not a good idea is that it can struggle with performance.

Select Performance

For example, let’s say we want to select all products where the color is brown. We can use the ->> operator in the WHERE clause that we saw earlier in this guide:
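
A hedged reconstruction, matching the query plan shown below:

```sql
SELECT id, product_name, attributes
FROM product
WHERE attributes ->> 'color' = 'brown';
```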

Here is the query plan for this.

Seq Scan on product (cost=0.00..12.40 rows=1 width=454)
  Filter: ((attributes ->> 'color'::text) = 'brown'::text)

The execution plan shows a Seq Scan, which is short for Sequential Scan and is a slow type of access. This might be OK for our table, which only has a few records, but once you start working with larger tables it can be quite slow.

Full-Text Index with GIN Index

PostgreSQL allows you to create a full-text index on the JSON field. This should improve the performance of any queries on this field.

It uses an index type called GIN, which stands for Generalised Inverted Index. It’s used in several situations, and one of which is for JSONB values.

Let’s see how we can create one.

We create an index on the attributes column used in the WHERE clause we want to run.
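
A hedged reconstruction, matching the description below:

```sql
CREATE INDEX idx_prod_json ON product USING GIN (attributes);
```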

The name of the index is idx_prod_json. The product table is used, and the attributes column is specified. We add USING GIN to use the GIN feature or index type.

Now let’s run the Select query again and see the explain plan.

Seq Scan on product (cost=0.00..1.06 rows=1 width=454)
  Filter: ((attributes ->> 'color'::text) = 'brown'::text)

We can see it still uses a Seq Scan, which may be because there are only four rows in the table.

However, we can see the cost is a lot less:

Before Index After Index
cost=0.00..12.40 cost=0.00..1.06

This may result in a faster query. The difference may be more evident on larger tables.

Tips for Working with JSON in PostgreSQL

In this guide, we’ve looked at what JSON is, seen how to create JSON fields in PostgreSQL, and seen a range of ways we can work with them.

So, what’s the best way to work with JSON fields in PostgreSQL?

Here are some tips I can offer for using JSON in PostgreSQL. They may not apply to your application or database but they are things to consider.

Just because you can, doesn’t mean you should

JSON is flexible and quite powerful, but just because you can store data in a JSON field, doesn’t mean you should. Consider using the advantages of the PostgreSQL relational database and using JSON where appropriate.

Treat the JSON field like a black box

The JSON field can be used to store valid JSON data sent or received by your application. While there are functions for reading from and working with the JSON field, it might be better to just store the JSON data in the field, retrieve it from your application, and process it there.

This is the concept of a black box. The application puts data in and reads data from it, and the database doesn’t care about what’s inside the field.

It may or may not work for your situation, but consider taking this approach.

Search by the Primary Key and other fields

We’ve seen that it can be slow to search by attributes inside the JSON field. Consider filtering by the primary key and other fields in the table, rather than attributes inside the JSON field. This will help with performance.


I hope you found this guide useful. Have you used JSON fields in PostgreSQL? What has your experience been like? Do you have any questions? Feel free to use the comments section on the post.

