It's an interesting idea. In the event of a concurrency failure in which record 100 is deleted at the same moment it's updated, we would just recreate the old data at auto-incrementing id 100. New records would then be added downstream of 100, picking up wherever the counter left off. So what's stopping someone from handing in an id millions ahead in the sequence, or running the counter up to (or just short of) the cap this way? We discussed this concept at my work but never implemented it, and I can't recall how we were going to get around that pitfall.
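To make that pitfall concrete, here's a minimal sketch using Python's sqlite3. The `records` table and its columns are hypothetical, and the exact counter semantics vary by database; SQLite's AUTOINCREMENT keyword, used here, advances the counter to the largest id ever seen, which is what makes the attack work:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY AUTOINCREMENT, data TEXT)"
)

# Seed ids 1-100, then simulate record 100 being deleted mid-update.
conn.executemany(
    "INSERT INTO records (data) VALUES (?)",
    [(f"row {i}",) for i in range(1, 101)],
)
conn.execute("DELETE FROM records WHERE id = 100")

# The proposed fix: re-insert the old data at the explicit id 100.
conn.execute("INSERT INTO records (id, data) VALUES (100, 'row 100 restored')")

# New records resume downstream, past 100, where the counting left off.
conn.execute("INSERT INTO records (data) VALUES ('next row')")
print(conn.execute("SELECT max(id) FROM records").fetchone())  # (101,)

# The pitfall: nothing stops a caller from handing in an id far ahead
# in the sequence, dragging the counter to just short of the 64-bit cap.
conn.execute(
    "INSERT INTO records (id, data) VALUES (9223372036854775806, 'hostile')"
)
conn.execute("INSERT INTO records (data) VALUES ('takes the last id')")
try:
    conn.execute("INSERT INTO records (data) VALUES ('overflow')")
except sqlite3.Error as e:
    print("sequence exhausted:", e)  # SQLite reports SQLITE_FULL here
```

Allowing explicit ids only in a restore path (or checking them against the current sequence value) would close the hole, but that's exactly the kind of guard we never worked out.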