[BIG performance issue] Table never loads with multiple large JSON fields

I have a table with about 20 rows, each row containing 1 large JSON ARRAY column with between 3,000 and 20,000 items in it.

It looks like even in the Table view, 8base is trying to fetch ALL of that data before displaying the individual rows.

That seems really unnecessary, especially given that it only shows maybe the first 20-40 characters of the JSON field in the column.

I’ve been waiting for my table to load for over 10 minutes and nothing is happening.

For this view, it should really only be fetching what’s necessary to display it (maybe a bit more), and then fetch the rest of any massive fields only when you open the edit modal.
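
To illustrate, here’s roughly what I’d expect (just a sketch; stocksList, stock, symbol, and priceHistory are placeholder names for my table/fields, not necessarily the exact generated API):

# Table view: only fetch the small fields the data browser actually shows
query TableView {
  stocksList(first: 20) {
    items {
      id
      symbol
      name
    }
  }
}

# Edit modal: fetch the big JSON field, but only for the one record being opened
query EditModal($id: ID!) {
  stock(id: $id) {
    id
    symbol
    priceHistory
  }
}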

EDIT: I decreased the size of my JSON ARRAY to max 6,000 items. But it’s still not loading.

EDIT: trying to fetch all the data at once causes the following error. (I don’t ever need to fetch all of it at once, just 1 record at a time, which works fine, but the Table view tries to fetch all of it at once as far as I can see.)

{
  "errors": [
    {
      "message": "Network error: Failed to fetch",
      "stack": "Error: Network error: Failed to fetch\n at new t (https://app.8base.com/static/js/51.48c55511.chunk.js:1:1495224)\n at https://app.8base.com/static/js/51.48c55511.chunk.js:1:1516048\n at https://app.8base.com/static/js/51.48c55511.chunk.js:1:1522162\n at Set.forEach ()\n at https://app.8base.com/static/js/51.48c55511.chunk.js:1:1522139\n at Map.forEach ()\n at e.broadcastQueries (https://app.8base.com/static/js/51.48c55511.chunk.js:1:1522090)\n at https://app.8base.com/static/js/51.48c55511.chunk.js:1:1514688"
    }
  ]
}

Hi @MarkLyck, WOW, you are the first person who has tried to store such large JSON in the fields :slight_smile: We will investigate it.
Question: what is the use case where you are trying to store such large JSON in the field? We might be able to suggest better ways to do that.


The table is “Stocks”, with 1 row for each stock.

And one of the columns is historic prices. (I’ve decreased it to only go back to the year 2000.) But it’s still too much for 8base to handle displaying.

The reasons I have this in a JSON column and not a separate table for prices:

#1 It’s easier to manage, since the data comes in as 1 giant update which is already in this format (and I need to update it retrospectively, which means replacing all the old records anyway).

#2 It seems like the updateMany, deleteMany, createMany mutations aren’t quite finished yet, and there’s no way I’m looping over 6k+ entries for every stock in the database. (Bonus: I don’t have to deal with potential errors if any single row fails to update/delete/create for whatever reason.)

#3 I won’t ever need to request all of this data at once. It’ll be the occasional request for this specific column, 1 stock at a time, when a user goes into a detail view and specifically wants to see historic data (see the sketch after this list).

#4 Performance. I haven’t tested this with 8base, but in the last database I used, it was much faster to fetch 1 row with 1 large JSON column and no filter than it was to fetch 3k-20k individual rows with a filter. (But please let me know if this is not the case for 8base.) With the JSON method I won’t need to filter across hundreds of thousands of rows.

#5 I’ll never need to get partial data. I’ll always need the full dataset, never a subset of it. So there’s little benefit to using individual rows in that sense.
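
Here’s roughly what those per-stock requests look like today (just a sketch; stock, stockUpdate, and priceHistory are my placeholder names, so the exact generated fields may differ):

# Fetch the full price history for a single stock (this works fine today)
query OneStockHistory($id: ID!) {
  stock(id: $id) {
    id
    symbol
    priceHistory
  }
}

# Replace the whole JSON array in one update when new data arrives,
# instead of looping over thousands of row-level mutations
mutation ReplaceHistory($id: ID!, $history: JSON!) {
  stockUpdate(data: { id: $id, priceHistory: $history }) {
    id
  }
}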

Either way, this would be a nice performance gain for 8base, not just for JSON fields but also for large text fields: e.g. store a “Preview” column for large fields, then fetch that preview column (which only holds maybe 20 characters) to display in the data browser, rather than having to fetch all the data for every single row. That should speed things up regardless of my database design.

I will have other large text fields as well as I move my applications to 8base, such as articles and comments, which will have the same problem (although maybe not as dramatic as this JSON example).

(In the Edit view, you’d obviously have to fetch the full fields, but only for 1 specific item, which will be fast enough)

Maybe when there’s a deleteMany & updateMany with filters, I can switch over to storing this in database rows, if that’s not too slow. But until then I think the best option is JSON.
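
What I’d hope for is something along these lines (purely hypothetical names and shapes, since these mutations don’t exist yet):

# Hypothetical: clear out the old prices for one stock in a single call...
mutation ClearOldPrices($stockId: ID!) {
  priceDeleteByFilter(filter: { stock: { id: { equals: $stockId } } }) {
    count
  }
}

# ...then bulk-insert the fresh dataset in one request
mutation InsertNewPrices($prices: [PriceCreateManyInput!]!) {
  priceCreateMany(data: $prices) {
    count
  }
}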

It seems like you have reached the AWS Lambda response size limit. We have investigated this problem before. You can find the Lambda limits in the docs: https://docs.aws.amazon.com/en_us/lambda/latest/dg/limits.html. But the real limit of the response is ~4 MB.

Your problem might be partially resolved once we replace the infinite scroll with pagination, so you could select a smaller number of records. (This is already in our pipeline.)
Anyway, you could try to use the API Explorer, a custom resolver, or some custom functionality for fetching / updating the data.
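
For example, in the API Explorer you could page through the records in smaller chunks so each response stays under the limit (a rough sketch; stocksList and priceHistory are placeholders for your table/field names):

# Page 1: first 5 records
query {
  stocksList(first: 5, skip: 0) {
    count
    items {
      id
      priceHistory
    }
  }
}

# Page 2: repeat with skip: 5, then skip: 10, and so on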


Okay :confused: Thanks for investigating though!

Do you guys have an update on the updateMany & deleteMany functionality?

Regardless, I would still recommend not fetching all the data for an entire table if there are large fields that aren’t being shown anyway. (It’s wasted download and bloats the size of the network requests.) It will speed things up in general for all table views that have more than just tiny fields.

@MarkLyck this is a known issue. I will check with the backend team how soon we can implement it and let you know.
