New deleteMany generated resolver?

Would be great to have a deleteMany like how we have createMany.

Can easily work around this by looping through a list and running delete on each item, but would be great to instead have a single atomic operation - either they all delete or none do.


Hey Sam! Both deleteMany and updateMany are ones that we’re looking for the best way to handle. I’ll let you know as soon as we throw them in a sprint.

Quick question though: how would you like to see them handled from an error-handling standpoint? For example, on deleteMany, if a record cannot be deleted, should the entire transaction fail? What would the GraphQL response object look like? Love to hear your ideas!

Cool, updateMany was going to be my follow-up question.

Personally would prefer it to be atomic - either it all goes through or the whole thing gets rolled back.

If it’s not atomic, then we can already replicate the behavior pretty easily by looping through to make requests and then using Promise.all to handle them all.
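That looping workaround could be sketched like this (a minimal sketch; `deleteOne` is a hypothetical function that wraps a single delete mutation and rejects on failure):

```javascript
// Non-atomic "deleteMany" workaround: one delete request per id, all run
// in parallel via Promise.allSettled.
async function deleteEach(ids, deleteOne) {
  const results = await Promise.allSettled(ids.map((id) => deleteOne(id)));
  // Collect the ids whose delete rejected. The other records have already
  // been deleted by this point, so this is NOT all-or-nothing.
  const failed = ids.filter((_, i) => results[i].status === "rejected");
  return { ok: failed.length === 0, failed };
}
```

Which is exactly the problem: one failure doesn’t roll back the deletes that already went through, so a truly atomic deleteMany on the server would still be a big improvement.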

Response object just the same as for the current ones (success/fail)?


Thanks for the feedback @samwoolertonLW! I’ve seen some funky examples where certain objects could go through and others return errors. It looked neat but felt wrong.

Yeah, the response object could simply be success: Boolean. However, there are a number of ways to think about updateMany as well. Check out this example.

mutation {
  userUpdateMany(filter: {
    email: {
      ends_with: "@smith.com" 
    }
  },
  data: {
    company: {
      connect: {
        name: "Smith, Inc."
      }
    }
  }) {
    count
    items {
      name
      email
    }
  }
}

Here we are saying that all records that match the filter receive the same relationship update. On the flip side, updateMany could apply many different updates in one batch.

mutation {
  userUpdateMany(data: [
    {
      id: "iw87g983gdq",
      name: "User's new name"
    },
    {
      id: "i379qd9739g397g",
      name: "Other User's new name"
    },
    {
      id: "7q39g8q6q86f86fq",
      phoneNumber: "+1-038-338-9276"
    },
  ]) {
    success
  }
}

Here we are saying to update each of these records separately, but in the same transaction.

Which one do you see as solving your use cases better?


Those both look really useful!

My use-case is closer to the first, in that I have an array of IDs and want a property to be set to the same value for all of them. Not sure if that’s currently supported as a filter type, but if not, that would be really handy too (filter where ID is in a given list of IDs).


Got it. Just so you know, too, you can currently do an updateMany-ish operation using aliases. Check out the following example, in case you haven’t tried it.

mutation {
  alias1: attendeeUpdate(data: {
    id: "ck24vm8it001l01i99z4r6vzj",
    name: "Jacob Jones"
  }) { ...attendeeFrag }
  
  alias2: attendeeUpdate(data: {
    id: "ck24vm8lk001n01i9b1qk1uul",
    name: "Tod Coffee"
  }) { ...attendeeFrag }
  
  anotherAlias: attendeeUpdate(data: {
    id: "ID_DOESNT_EXIST",
    name: "Tod Coffee"
  }) { ...attendeeFrag }
}

fragment attendeeFrag on Attendee {
  id
  name
  createdAt
}

In this example, all the updates are sent in a single request; however, since the last alias’s ID doesn’t exist, the request returns an error EVEN THOUGH the other records updated successfully.

{
  "data": null,
  "errors": [
    {
      "message": "The request is invalid.",
      "locations": [
        {
          "line": 10,
          "column": 3
        }
      ],
      "path": [
        "anotherAlias"
      ],
      "code": "ValidationError",
      "details": {
        "id": "Record for current filter not found."
      }
    }
  ]
}

If I run the same mutation WITHOUT the missing item, it runs successfully and I get the following response.

{
  "data": {
    "alias1": {
      "id": "ck24vm8it001l01i99z4r6vzj",
      "name": "Jacob Jones",
      "createdAt": "2019-10-24T15:43:36.870Z"
    },
    "alias2": {
      "id": "ck24vm8lk001n01i9b1qk1uul",
      "name": "Tod Coffee",
      "createdAt": "2019-10-24T15:43:36.968Z"
    }
  }
}

That’s pretty cool, will keep it in mind, thanks!
I need all to go through or none for my use-case, but it may be useful in future.

This is really interesting.

I will soon be needing an updateMany, createMany and deleteMany as well.

My use case falls between these though.

I have ~3000 rows I need to update in one go every week. Each row will have a unique identifier that is not the ID.

The really tricky part is the new dataset may have records that did not exist in the previous week’s dataset.

So for me, if the entire request fails and gets rolled back when it tries to update a UID that doesn’t exist, that would force me to instead use either deleteMany + createMany to get my example to work, or first do a lookup to determine which 2900+ records I “can” update safely and then add a createMany for the remainder.
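The lookup-and-compare step itself is cheap once you have the list of existing UIDs. A minimal sketch (the `uid` field name is made up to stand in for the non-ID unique identifier):

```javascript
// Split incoming rows into "update" (UID already exists) and "create"
// (UID is new) buckets, so updateMany and createMany can each get a
// clean list.
function splitRows(rows, existingUids) {
  const existing = new Set(existingUids);
  const toUpdate = [];
  const toCreate = [];
  for (const row of rows) {
    (existing.has(row.uid) ? toUpdate : toCreate).push(row);
  }
  return { toUpdate, toCreate };
}
```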

The “ideal” (but possibly not best) solution for 8base to handle my use case is that it updates the ones it can and gives me an error with information about the ones that failed, so after it “fails” the few, I can create the missing ones. But I do realize that’s kind of a “strange” way to handle it, and in general it probably makes more sense for you guys to have the entire thing fail, like Sam said.

(it would be really cool to have both options though, 1 mutation that fails the whole set, and 1 that just fails per row)

In the past I’ve had to loop and make a request for every single row to delete, which OFTEN fails in GraphQL because there are just so many records and the request times out. This is incredibly annoying… So if there’s a deleteMany/createMany that can handle 3k+ records at once, that will be a dramatic improvement.

Can you use the aliases approach Sebastian mentioned above? Looking at the error response, it seems like it would return a list of every alias that failed (and you’d generate the aliases dynamically, so you should be able to handle it pretty easily). That way the bulk of the updates go through, and the rest you just create as new records.
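Generating the aliases dynamically might look something like this (a sketch based on the attendeeUpdate example above; string escaping and extra fields are left out, and the error shape is assumed to match the ValidationError response Sebastian showed):

```javascript
// Build one mutation document with an aliased attendeeUpdate per record.
// The alias u<i> encodes the record's index so failures can be mapped
// back to the original records.
function buildAliasedUpdateMutation(updates) {
  const fields = updates
    .map(
      (u, i) =>
        `u${i}: attendeeUpdate(data: { id: "${u.id}", name: "${u.name}" }) { id name }`
    )
    .join("\n  ");
  return `mutation {\n  ${fields}\n}`;
}

// Given the "errors" array of a GraphQL response, recover the records
// whose aliased update failed by decoding the index from each path entry.
function failedUpdates(updates, errors) {
  return (errors || [])
    .filter((e) => Array.isArray(e.path) && /^u\d+$/.test(e.path[0]))
    .map((e) => updates[Number(e.path[0].slice(1))]);
}
```

The records that come back from `failedUpdates` would then be the ones to feed into the create step.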

Also, you mentioned it timing out - are you waiting for each loop to complete before doing the next, or using Promise.all on an array of all 3k requests?

The looping was with the last BaaS I was using (graph.cool), because they didn’t have an updateMany or createMany mutation, only deleteMany.

I ended up batching 100 at a time and waiting for each batch to resolve with Promise.all, because anything more was too unstable (it just took ridiculously long to update my dataset). Hoping 8base can help resolve that when I get to that feature.
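That batching pattern can be sketched as follows (a sketch; `handleOne` is a hypothetical stand-in for whatever sends a single request):

```javascript
// Split an array into chunks of at most `size` items.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Process chunks sequentially, but run the requests inside each chunk in
// parallel, so at most `size` requests are in flight at any one time.
async function processInBatches(items, size, handleOne) {
  const results = [];
  for (const batch of chunk(items, size)) {
    results.push(...(await Promise.all(batch.map(handleOne))));
  }
  return results;
}
```

Capping the in-flight requests this way trades total wall-clock time for stability, which is why it feels slow compared to a single server-side many-mutation.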

And yeah, if the alias method works with 3000+ records, that will solve this nicely. My only worry is how long it will take, and whether the network request will time out (on the browser’s side) so I never get the response.