New deleteMany generated resolver?

Would be great to have a deleteMany like how we have createMany.

You can easily work around this by looping through a list and running delete on each item, but it would be great to instead have a single atomic operation - either they all delete or none do.
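
For reference, a minimal sketch of that workaround, assuming the graphql-request client and a generated itemDelete mutation (both the client choice and the mutation name are placeholders, not necessarily 8base’s actual API):

import { GraphQLClient, gql } from 'graphql-request'

const client = new GraphQLClient('https://api.example.com/graphql')

// Hypothetical generated delete mutation; the real name depends on the table.
const DELETE_ITEM = gql`
  mutation DeleteItem($id: ID!) {
    itemDelete(data: { id: $id }) {
      success
    }
  }
`

// One request per ID. Promise.all rejects on the first failure, but any
// deletes that already succeeded are NOT rolled back - no atomicity.
const deleteAll = (ids) =>
  Promise.all(ids.map((id) => client.request(DELETE_ITEM, { id })))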

Hey Sam! deleteMany and updateMany are both ones we’re looking for the best way to handle. I’ll let you know as soon as we throw them in a sprint.

Quick question though: how would you like to see them handled from an error-handling standpoint? For example, on deleteMany, if a record cannot be deleted, should the entire transaction fail? What would the GraphQL response object look like? Love to hear your ideas!

Cool, updateMany was going to be my follow-up question.

Personally would prefer it to be atomic - either it all goes through or the whole thing gets rolled back.

If it’s not atomic, then we can already replicate that behavior pretty easily by looping through to make the requests and then using Promise.all to handle them all.

Response object just the same as for the current ones (success/fail)?

Thanks for the feedback, @samwoolertonLW! I’ve seen some funky examples where certain objects could go through and others return errors. It looked neat but felt wrong.

Yeah, the response object could be as simple as success: Boolean. However, there are a number of ways to think about updateMany as well. Check out this example.

mutation {
  userUpdateMany(filter: {
    email: {
      ends_with: "@smith.com"
    }
  },
  data: {
    company: {
      connect: {
        name: "Smith, Inc."
      }
    }
  }) {
    count
    items {
      name
      email
    }
  }
}

Here we are saying that all records matching the filter get the same update applied to their relationship. On the flip side, updateMany could also apply many different updates in one batch.

mutation {
  userUpdateMany(data: [
    {
      id: "iw87g983gdq",
      name: "User’s new name"
    },
    {
      id: "i379qd9739g397g",
      name: "Other User’s new name"
    },
    {
      id: "7q39g8q6q86f86fq",
      phoneNumber: "+1-038-338-9276"
    }
  ]) {
    success
  }
}

Here we are saying to update each of these records separately, but in the same transaction.

Which one do you see as solving your use cases better?

Those both look really useful!

My use-case is closer to the first, in that I have an array of IDs and want a property to be the same for all of them. Not sure if that’s currently supported as a filter type, but if not then that would be really handy too (filter where the ID is in a given list of IDs).
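
Hypothetically, something like this (the in predicate and the status field are both made up for illustration - I don’t know whether the generated filter supports this yet):

mutation {
  userUpdateMany(
    # hypothetical "in" predicate - check the generated filter type
    filter: {
      id: {
        in: ["id_1", "id_2", "id_3"]
      }
    },
    data: {
      status: "archived"
    }
  ) {
    count
  }
}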

Got it. Just so you know, too, you can currently do an updateMany-ish command using aliases. Check out the following example, in case you haven’t tried it.

mutation {
  alias1: attendeeUpdate(data: {
    id: "ck24vm8it001l01i99z4r6vzj",
    name: "Jacob Jones"
  }) { ...attendeeFrag }
  
  alias2: attendeeUpdate(data: {
    id: "ck24vm8lk001n01i9b1qk1uul",
    name: "Tod Coffee"
  }) { ...attendeeFrag }
  
  anotherAlias: attendeeUpdate(data: {
    id: "ID_DOESNT_EXIST",
    name: "Tod Coffee"
  }) { ...attendeeFrag }
}

fragment attendeeFrag on Attendee {
  id
  name
  createdAt
}

In this example, all the updates are being sent in a single request; however, since the last alias’s ID doesn’t exist, the request returns an error EVEN THOUGH the other records updated successfully.

{
  "data": null,
  "errors": [
    {
      "message": "The request is invalid.",
      "locations": [
        {
          "line": 10,
          "column": 3
        }
      ],
      "path": [
        "anotherAlias"
      ],
      "code": "ValidationError",
      "details": {
        "id": "Record for current filter not found."
      }
    }
  ]
}

If I run the same mutation WITHOUT the missing item, it runs successfully and I get the following response.

{
  "data": {
    "alias1": {
      "id": "ck24vm8it001l01i99z4r6vzj",
      "name": "Jacob Jones",
      "createdAt": "2019-10-24T15:43:36.870Z"
    },
    "alias2": {
      "id": "ck24vm8lk001n01i9b1qk1uul",
      "name": "Tod Coffee",
      "createdAt": "2019-10-24T15:43:36.968Z"
    }
  }
}

That’s pretty cool, will keep it in mind, thanks!
I need all to go through or none for my use-case, but it may be useful in future.

This is really interesting.

I will soon be needing an updateMany, createMany and deleteMany as well.

My use case falls between these though.

I have ~3000 rows I need to update in one go every week. Each row will have a unique identifier that is not the ID.

The really tricky part is the new dataset may have records that did not exist in the previous week’s dataset.

So for me, if the entire request fails and gets rolled back when it’s trying to update a UID that doesn’t exist, that would force me to instead use deleteMany + createMany to make my example work, or to first do a lookup to compare which 2,900+ records I “can” update safely and then add a createMany for the remainder.

The “ideal” but possibly not best solution for 8base to handle for my use case is that it updates the ones it can, and gives me an error with the information of the ones that failed, so after it “fails” the few, I can create the missing ones. But I do realize that’s kind of a “strange” way to handle it, and in general for you guys it probably makes more sense to have the entire thing fail, like Sam said.

(it would be really cool to have both options though, 1 mutation that fails the whole set, and 1 that just fails per row)

In the past I’ve had to loop and make a request for every single row to delete, which OFTEN fails in GraphQL because there are just so many records that the request times out. This is incredibly annoying… So at least if there’s a deleteMany/createMany that can delete 3k+ records at once, that will be a dramatic improvement.

Can you use the aliases approach Sebastian mentioned above? Looking at the error response, it seems like it would return a list of every alias that failed (and you’d generate those dynamically, so you should be able to handle it pretty easily) - that way the updates that can go through do, and the rest you just create as new records.
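
Generating the aliases dynamically could be as simple as something like this (stockUpdate and the name field are guesses at your schema, and a real version should use GraphQL variables rather than string interpolation):

// Build one aliased mutation from an array of { id, name } updates.
// stockUpdate is a placeholder for whatever the generated mutation is called.
const buildBatchUpdate = (updates) => `mutation {
  ${updates
    .map((u, i) => `u${i}: stockUpdate(data: { id: "${u.id}", name: "${u.name}" }) { id }`)
    .join('\n  ')}
}`

Then every alias that fails shows up in the errors array under its own path, so you can map the failures back to the records that need creating instead.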

Also, you mentioned it timing out - are you waiting for each request to complete before starting the next, or using Promise.all on an array of all 3k requests?

The looping was with the last BaaS I was using (graph.cool), because they didn’t have an updateMany or createMany mutation - only deleteMany.

I ended up batching 100 at a time and waiting for each batch to resolve with Promise.all, because it was too unstable otherwise (it just took ridiculously long to update my dataset). Hoping 8base can help resolve that when I get to that feature.
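
The batching looked roughly like this (a sketch from memory, with a placeholder sendUpdate function standing in for the actual request):

// Split an array into chunks of the given size.
const chunk = (arr, size) =>
  Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size))

// Process 100 records at a time: each batch runs concurrently,
// but we wait for one batch to finish before starting the next.
async function updateInBatches(records, sendUpdate) {
  for (const batch of chunk(records, 100)) {
    await Promise.all(batch.map(sendUpdate))
  }
}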

And yeah, if the alias method works with 3,000+ records, that will solve this nicely. My only worry is how long it will take, and whether the network request will time out (on the browser’s side) so I never get the response.

@samwoolertonLW I’m pretty sure that the error returns for the first failed record. I’ll need to play with it a little more to figure out the exact behavior, but it can be tested in the API Explorer easily.

This is a great test case for us as we work toward including the updateMany/deleteMany resolvers, @MarkLyck. Out of curiosity, what is the unique identifier? Could it be one that a set of records could get filtered by? Or does it have to be a list of unique identifiers?

What I’m trying to picture is how you’d best be able to specify which 3,000+ records need to be updated each week. Also, do all 3,000 get the same update? Or is each record having different data saved to it?

The UID I mentioned is a stock “ticker” (a ~2-4 letter string that identifies a stock). E.g. Apple (the company) is AAPL.

When we get a new set of data every week, we need to update ~3,000 records at once (each with different new data), and every week a few stocks could get removed and a few new stocks could get added. So if there are new stocks added, the update won’t find a stock with a matching ticker in the database.

So I need to update ~3000 records every week with individual new data, based on a UID per row.

Ideally in a way that doesn’t fail if there are new records that didn’t exist before.

In a completely ideal world, there would be a mutation that updates an existing record if it’s there, or creates a new one if it’s missing. That would be my perfect scenario. (But that’s a bit too specific to ask to be a priority feature request :sweat_smile:)

If I could do an update that just gives me a list of errors, one for each update that failed with an “x doesn’t exist” message, so I can then loop over them and do a createMany afterwards, that would be perfectly fine.

The update with aliases kinda works for this, but it does feel like a dirty workaround.
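
In lieu of a real upsert, the per-record fallback I’m picturing is something like this (stockUpdate/stockCreate are placeholder mutation names, and the not-found check keys off the ValidationError code from the response Sebastian pasted above):

// Try the update first; if the record doesn't exist, create it instead.
async function upsertStock(client, UPDATE_STOCK, CREATE_STOCK, stock) {
  try {
    return await client.request(UPDATE_STOCK, stock)
  } catch (err) {
    // graphql-request surfaces the GraphQL errors on err.response.errors.
    const notFound = err.response?.errors?.some((e) => e.code === 'ValidationError')
    if (!notFound) throw err
    return client.request(CREATE_STOCK, stock)
  }
}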

Okay, epic. I get it. Are you only planning on tracking Nasdaq stocks? If so, then we only have to consider 3,300 stocks being tracked/updated.

While we don’t have a recordUpdateOrCreate mutation, there are probably some ways this could be accomplished in fewer API calls lol.

I’m imagining the following:

You have a custom function – maybe called “weeklyStocksUpdate” – that receives all the new data as an argument (or a URL to a file containing the data). That said, the data that needs to be updated has all the existing ticker symbols AND the new ticker symbols.

By querying your API, you could then retrieve a list of all the tickers you need to update (query { stocksList { items { ticker } } }). Now, finding the intersection and difference between these two lists lets us know which tickers we need to create and which we need to update.

let tickersToUpdate = [];
let tickersToCreate = [];

// stocksList.items comes back as objects shaped { ticker }, so build a
// set of the ticker strings rather than comparing against the objects.
const existingTickers = new Set(stocksList.items.map((item) => item.ticker));

stockListUpdateTickers.forEach((ticker) => {
  (existingTickers.has(ticker) ? tickersToUpdate : tickersToCreate).push(ticker);
});

Once we have these lists, the create batch can be done quickly with a single stocksCreateMany request (sketched below), and the updates could then be handled (not ideal, but now much more reliably) using aliases in batches.
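
For the create half, the shape would be roughly this (assuming stocksCreateMany follows the same data-list pattern as the userUpdateMany example earlier; the ticker/price fields are invented, so check your generated schema):

mutation {
  # field names are invented - use your table's actual fields
  stocksCreateMany(data: [
    {
      ticker: "NEWCO",
      price: 42.5
    },
    {
      ticker: "OTHR",
      price: 17.2
    }
  ]) {
    count
  }
}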

It’s not just Nasdaq, it’s also Canadian stocks, and we have some filters on it based on market caps etc. (which is why the number changes from week to week, as new stocks pass our filters and other ones fail them).

And yeah, what you proposed is exactly what I’m already doing with graph.cool

I was hoping to make things better, smoother, and more performant overall with my move to 8base though, and to me this fetch & intersection should be unnecessary.

(I’m a bit of a performance geek as you might be able to tell with my other posts on here :sweat_smile:)

And as it stands so far, I can’t use createMany because relationships don’t work in that mutation. Which means I have to create them with a loop as well.

It would be really nice to know what 8base’s roadmap is for some of these things. I can either invest a bunch of time now and have to redo it later when you fix createMany, updateMany and deleteMany… Or, what I’m doing now: my 8base progress is basically paused until, hopefully, this gets done “soon”.

But I have no idea where this stuff is in your pipeline - whether it’s at the bottom of a backlog or coming in the current/next sprint.

@MarkLyck we will discuss upcoming features next week and I will let you know.

Any news on deleteMany functionality? @sebastian.scholl @ilya.8base

@MarkLyck we are going to start on this soon and it could take a month or so. I can’t tell you more than that right now.

Are there any updates on this? Is this something we could accomplish with our own custom resolver?

It’s coming through the pipeline and should be out soon!

We recently made an update to the API that adds DeleteByFilter and DestroyByFilter mutations.

Docs here! https://docs.8base.com/docs/8base-console/graphql-api/mutations/#auto-generated-mutations
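
As a rough example of the shape (the mutation name below is illustrative - see the docs for the exact names and response fields generated for your tables):

mutation {
  # illustrative mutation name - see docs for the generated names
  attendeeDeleteByFilter(filter: {
    name: {
      equals: "Tod Coffee"
    }
  }) {
    success
  }
}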
