The words of every junior dev right before I have to spend a weekend undoing their crap.
I’ve been there too many times.
There are always edge cases you need to account for, and you can’t account for them until you run tests and then verify the results.
And you’d be parsing billions upon billions of records. That’s not a trivial thing to do across multiple rounds of testing and verification, and ultimately for a trivial payoff.
You don’t screw around with your business’s irreplaceable prod data without first exhausting every possible way the modification could go wrong.
It’s a piece of cake.
It hurts how often I’ve heard this, and how often it’s followed by a massive screw-up.
There are so many ways this could be done that I think you’re not considering. Say a user goes to “shreddit” (or some similar app) their comments. They likely have thousands. On every comment edit, it’s quite easy to check when that user last edited one of their comments. All Reddit needs is a simple check: were the last 10 consecutive edits spaced hours apart, or milliseconds/seconds apart? Past that threshold, Reddit could easily just tell the user it’s editing their comments while silently not doing so. Like a shadowban kind of method.

Another way would be at the data structure level. We don’t know what their databases and hardware look like, but I can speculate. What if each comment edit is not an UPDATE query on a database, but an INSERT of a new version? Then all you need to do is repoint the live comments to the latest version dated before the malicious date, where username=$username. Not to mention, when you start talking Nimble storage and the like, the storage is extremely quick to respond. Hell, I would wager the edits haven’t even hit storage yet; they’re probably still in an all-flash cache or in memory.

Another way could be at the filesystem level. Ever heard of ZFS? If each user had their own dataset or something similar, it’s extremely easy and quick to roll back a snapshot, or to clone the previous snapshot. There are so many ways (rough sketches of all three follow below).
At the end of the day, a user is triggering this action, so we don’t necessarily need to parse “billions” of records. Just the records for a single user.
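For the edit-spacing heuristic, here’s a minimal sketch of what that check could look like. To be clear, this is entirely my own invention: the function names, the 10-edit window, and the 60-second threshold are assumptions, not anything we actually know about Reddit’s code.

```python
from datetime import datetime, timedelta

# Hypothetical heuristic: flag a user as mass-editing if their last
# N consecutive comment edits all landed within a short window,
# i.e. seconds apart instead of hours apart.
WINDOW = 10                       # number of recent edits to inspect (assumed)
MAX_SPAN = timedelta(seconds=60)  # humans don't edit 10 comments in a minute (assumed)

def looks_like_mass_edit(edit_times: list[datetime]) -> bool:
    """edit_times: timestamps of this user's past comment edits."""
    recent = sorted(edit_times, reverse=True)[:WINDOW]
    if len(recent) < WINDOW:
        return False              # not enough history to judge
    span = recent[0] - recent[-1] # newest minus oldest of the window
    return span <= MAX_SPAN

def handle_edit(comment, new_body, edit_times):
    if looks_like_mass_edit(edit_times):
        # Shadowban-style response: acknowledge the edit to the client
        # but never persist it, so the scraping tool thinks it succeeded.
        return {"ok": True}
    comment.body = new_body       # normal edit path
    comment.save()
    return {"ok": True}
```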
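For the insert-instead-of-update idea, here’s a sketch of the rollback query. The schema is invented for illustration: I’m assuming a comment_versions table where every edit is a new row, and a comments table whose live_version_id points at whichever version is currently shown.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Every edit is an INSERT of a new row, never an UPDATE in place.
    CREATE TABLE comment_versions (
        id         INTEGER PRIMARY KEY,
        comment_id INTEGER NOT NULL,
        username   TEXT    NOT NULL,
        body       TEXT    NOT NULL,
        created_at TEXT    NOT NULL   -- ISO-8601 timestamp
    );
    -- The live pointer: which version each comment currently shows.
    CREATE TABLE comments (
        id              INTEGER PRIMARY KEY,
        live_version_id INTEGER NOT NULL REFERENCES comment_versions(id)
    );
""")

def rollback_user(con, username, malicious_date):
    """Repoint every comment by `username` to its newest version
    created before `malicious_date`. Only touches one user's rows."""
    con.execute("""
        UPDATE comments
        SET live_version_id = (
            SELECT v.id FROM comment_versions v
            WHERE v.comment_id = comments.id
              AND v.username   = :username
              AND v.created_at < :malicious_date
            ORDER BY v.created_at DESC
            LIMIT 1
        )
        WHERE id IN (
            -- only comments that actually have a pre-attack version
            SELECT comment_id FROM comment_versions
            WHERE username = :username
              AND created_at < :malicious_date
        )
    """, {"username": username, "malicious_date": malicious_date})
    con.commit()
```

With an index on (username, created_at), that query only ever scans one user’s rows, which is exactly the point about not needing to parse billions of records.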
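And the ZFS angle, assuming the per-user-dataset layout I floated above (pure speculation about layout, not how Reddit actually stores anything). zfs rollback and zfs clone are real commands; the pool and dataset names here are made up.

```python
import subprocess

def rollback_user_dataset(pool, username, snapshot):
    """Roll a hypothetical per-user ZFS dataset back to a snapshot
    taken before the mass edit. The -r flag destroys any snapshots
    newer than the target, which is what you'd want here."""
    dataset = f"{pool}/users/{username}"   # invented dataset layout
    snap = f"{dataset}@{snapshot}"
    subprocess.run(["zfs", "rollback", "-r", snap], check=True)

def clone_user_dataset(pool, username, snapshot, suffix="restore"):
    """Alternative: leave the mangled data in place and clone the
    pre-edit snapshot to a new dataset for inspection or cutover."""
    dataset = f"{pool}/users/{username}"
    snap = f"{dataset}@{snapshot}"
    subprocess.run(["zfs", "clone", snap, f"{dataset}-{suffix}"], check=True)

# e.g. rollback_user_dataset("tank", "some_user", "hourly-2023-06-10-0300")
```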