Choosing between names and identifiers in URLs
157 points by bussetta
https://cloudplatform.googleblog.com/2017/10/API-design-choosing...
jlg23 - 3 hours ago Missing for me: Timestamps. A lot of data is sufficiently unique if prefixed with a timestamp, which could be as simple and readable as /2017/10/17/my-great-blog-post/
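A minimal sketch of what such date-prefixed paths might look like in practice; the function name and slug rules here are illustrative, not from the comment:

    from datetime import date
    import re

    def timestamped_path(title: str, published: date) -> str:
        # Build a date-prefixed, human-readable path like /2017/10/17/my-great-blog-post/
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        return f"/{published:%Y/%m/%d}/{slug}/"

    print(timestamped_path("My great blog post", date(2017, 10, 17)))
    # /2017/10/17/my-great-blog-post/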
spiralpolitik - 3 hours ago "The downside of the second example URL is that if a book or shelf changes its name, references to it based on hierarchical names like this one in the example URL will break." The author appears to have forgotten about 3xx redirection codes, which were intended to solve that very problem.
fixermark - 1 hours ago 3xx redirection requires the backing store to maintain some kind of permanent edit history, and is therefore not necessarily something one can assume one will have. There's also the problem of aliasing; if another book by the same name is later added to the shelf, the hierarchical name now references an entirely different resource.
sametmax - 3 hours ago But they have been abused for black-hat SEO and are now considered suspicious by search engines, so we use them sparingly. This is why we can't have nice things.
always_good - 2 hours ago I don't buy it. Redirecting to canonical URLs is canonicalization 101. https://support.google.com/webmasters/answer/139066?hl=en#4 Also, what would be an example of same-origin redirect abuse?
sametmax - 1 hours ago Bypassing blacklists when posting links, while still benefiting from crawlers following the links, comes to mind. During the 2000s, following links for a forum or blog was way too expensive, so they had blacklists of dirty words to stop porn sites from spamming to get juice during the PageRank golden years, when any back reference mattered. Hence it was just easier, to avoid the filters, to create non-blacklisted domain names with redirections.

Then another trick was to write a perfectly legitimate page, get Google to index it, then redirect that page to the less legitimate page. Because at the time Google refreshed once a week (or a month...), you'd get plenty of traffic and revenue for long enough to be worth it. If you sold niche porn and viagra, that is.

Another one was just to set up fake sites with different URL schemes with stats on them, and get a regular update on which URL formats were getting the best hits. At the time URLs were very important in getting points. Then you would regularly update your most important sites' URL schemes accordingly, several times a year if needed.
always_good - 1 hours ago I have a hard time believing that modern search engines are so incapable that they have to devalue redirects to the point that honest users have to worry about it.
sametmax - 1 hours ago Well that's just what I know about the things we did then. I'm not working in porn anymore, so I'm missing the new cool tricks, or abuses, depending on your point of view. But the community is VERY creative. Now, the last time I massively changed URLs for a client's website and noticed a significant drop in traffic that took a few months to recover was years ago. So the situation might have changed. But I'm not going to test that assumption with my clients' money :)
SquareWheel - 56 minutes ago What? 301s are the standard way to create redirects. That has nothing to do with blackhat SEO. The small dip caused by 301s was even recently removed altogether.
mcdan - 4 hours ago Isn't one problem with this that intermediate caches now have two resources that represent the same thing, therefore invalidation of intermediate caches will be nearly impossible?
sneak - 3 hours ago Cache invalidation remains one of the two hard problems in computer science (the other being naming things and off by one errors).
kmicklas - 1 hours ago Off by one errors basically don't exist if you use modern languages and practices.
mjevans - 1 hours ago As long as humans are still providing input...
gipp - 57 minutes ago It's a joke.
yathern - 4 hours ago Great post - I quite like the stackoverflow.com style of `stackoverflow.com/questions/<id>/<title>`, where `<title>` can be changed to anything and the link still works. This allows for easy URL readability, while also having a unique ID. In the context of this post (the library example) that would look like library.com/books/1as03jf08e/Moby-Dick/
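A rough sketch of how such ID-plus-slug URLs can be resolved, using the library example above: only the ID is used for lookup and the trailing slug is ignored. The route pattern and names are hypothetical, not from any of the sites mentioned.

    import re
    from typing import Optional

    # The opaque ID identifies the book; the trailing slug is optional and ignored.
    BOOK_URL = re.compile(r"^/books/(?P<id>\w+)(?:/[^/]*)?/?$")

    def resolve(path: str) -> Optional[str]:
        # Return the book ID for a library.com-style URL, or None if it doesn't match.
        m = BOOK_URL.match(path)
        return m.group("id") if m else None

    assert resolve("/books/1as03jf08e/Moby-Dick/") == "1as03jf08e"
    assert resolve("/books/1as03jf08e") == "1as03jf08e"
    assert resolve("/books/1as03jf08e/anything-at-all") == "1as03jf08e"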
nayuki - 3 hours ago Amazon has been using this URL scheme for many years, e.g.: https://www.amazon.com/Optional-product-name/dp/A00BCDEF00ID...
saurik - 3 hours ago Doing this means that:

1) there are now an infinite number of URLs for every one of your pages that may end up separately stored on various services (mitigated for only some kinds of service if you redirect to correct),

2) if the title changes, the URLs distributed are now permanently wrong, as they stored part of the content (and if you redirect to correct, this can lead to temporary loops due to caches),

3) the URL is now extremely long, and since most users don't know if a given website does this weird "part of the URL is meaningless" thing, there are tons of ways of manually sharing the URL that are now extremely laborious,

4) you have now made content that users think should somehow be "readable" but which doesn't even try to be canonical... so users who share the links will think "the person can read the URL, so I won't include more context" and the person receiving the links thinks "the URL has the title, which I can trust more than what some random user adds".

The only website I have ever seen which I feel truly understands that people misuse and abuse title slugs, and actively forces people not to use them, is Hacker News (which truncates all URLs in a way I find glorious), which is why I am going to link to this question on Stack Exchange that will hopefully give you some better context "manually". meta.stackexchange.com/questions/148454/why-do-stack-overflow-links-sometimes-not-work/

Many web browsers don't even show the URL anymore: the pretense that the URL should somehow be readable is increasingly difficult to defend. A URL should sometimes still be short and easy to type, but these title slug URLs don't have that property. If anything, other critical properties of a URL are that it is permanent and canonical, and neither of these properties tends to be satisfied well by websites that go with title slugs; including the ID mitigates the problem, but it leaves the URL in some confusing middle-land where part of it has these properties and part of it doesn't.

If you are going to insist upon doing this, how about doing it using a # on the page, so at least everyone has a chance to know that it is extra, random data that can be dropped from the URL without penalty, might not come from the website, and so shouldn't be trusted?

(edit to add:) BTW, if you didn't know you could do this, Twitter is the most epic source of "part of the URL has no meaning" that I have ever run across, as almost no one realizes it due to where it is placed in the URL. twitter.com/realDonaldTrump/status/247076674074718208
yathern - 3 hours ago > there are now an infinite number of URLs for every one of your pages that may end up separately stored on various services

What services? Web crawlers? I'm sure the ones I would care about are smart enough to know how this works. There are many ways infinite valid URLs can be made: query params, subdomains, and hash routes, to name a few.

> if the title changes the URLs distributed are now permanently wrong as they stored part of the content (and if you redirect to correct, can lead to temporary loops due to caches)

You don't redirect. The server doesn't even look at the slug part of the URL for routing purposes. You can change the URL with javascript post-load if it bothers you (as stackoverflow does). Cache loops are an entirely avoidable problem here.

> the URL is now extremely long and since most users don't know if a given website does this weird "part of the URL is meaningless" thing there are tons of ways of manually sharing the URL that are now extremely laborious

Extremely long and extremely laborious seems a bit of an exaggeration. Most users copy and paste, no? Adding a few characters of a human-readable tag doesn't warrant this response, I feel. Especially when the benefit means that if I copy and paste a URL into someplace, I can quickly error-check it to make sure it's the title I mean. When using the share button, the de-slugged URL can be given.

> users who share the links will think "the person can read the URL, so I won't include more context" and the person receiving the links thinks "and the URL has the title, which I can trust more than what some random user adds".

I guess? I won't bother with a rebuttal because this issue seems so minor. The benefit far outweighs some users maybe providing less context because the link URL made them do it. If someone says "My typescript won't compile because of my constructor overloading or something please help", I can send stuff like stackoverflow.com/questions/35998629/typescript-constructor-overload-with-empty-constructor and stackoverflow.com/questions/26155054/how-can-i-do-constructor-overloading-in-a-derived-class-in-typescript which I think is so much more useful than just IDs.

> Many web browsers don't even show the URL anymore: the pretense that the URL should somehow be readable is increasingly difficult to defend

Most do. Even still, the address bar is not the only place a URL is seen. Links in text all over the internet have URLs - particularly when shared in unformatted text (i.e. not anchor tags). And URLs should be readable to some extent. Would you suggest that all pages might as well be unique IDs? A URL like https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... is much better than https://developer.mozilla.org?articleId=10957348203758

> how about doing it using a # on the page, so at least everyone had a chance to know that it is extra

Fair enough - I think that's a fine idea.
hyperpape - 3 hours ago I don't have a direct piece of evidence, but most users don't even know about ctrl-f, so I think they don't copy and paste. They click (or tap, these days) on links. https://www.theatlantic.com/technology/archive/2011/08/crazy... Most users click links.
yathern - 2 hours ago I meant in the context of sharing links, either on a board like this or in a text. But that does bring up a good point of how many users know how to copy/paste. Among all internet users, I would conservatively assume 30%+ do. Among people who have posted a link to social media or forums, I would assume 80%+. But I'd be interested to see how off I am.
unkown-unknowns - 37 minutes ago One thing I find useful about slugs in URLs is that they let me see that I used the intended link when I paste it
musage - 37 minutes ago > 1) there are now an infinite number of URLs for every one of your pages that may end up separately stored on various services (mitigated for only some kinds of service if you redirect to correct)

No need to redirect, that's what canonical links are for: https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types

I don't disagree, in that I mostly dislike URL slugs too. Except for some hub pages ("photos", "blog", etc.), a numerical ID is more than enough. But the combination of ordering and display modes and filtering can still amount to a huge number of combinations, so canonical links are still needed - to give the user as many options as possible and allow them all to be bookmarked, but also to give search engines a hint about which minor permutations they can safely ignore.

I wish search engines would completely ignore words in the URL. If it's not in the page (or the "metadata" of actual content on pages linking to it, and so on), screw the URL. If it is in the page (and the URL), you don't need the URL. As long as they are incentivized, we'll have fugly URL schemes.
[deleted]
vilmosi - 20 minutes ago >>> the pretense that the URL should somehow be readable is increasingly difficult to defendI think I have a defense for this. I consistently long press links on mobile to see the url before deciding whether to load the page or not. Just to see if I can be bothered.
baby - 12 minutes ago 1: so what? I use this for my blog (cryptologie.net) and this has never been a problem. Search engines handle that quite well.

2: no. The URL is not wrong. Rather it won't describe the content perfectly anymore. If this is an issue you can attribute a new ID to your page.

3: that's why you have URL shorteners. But what's wrong with a long URL? And how does it complicate sharing? To share you copy/paste the URL. Nothing changed. And now the URL describes the content! (That's the reason we do it.)

4: that's a good thing!

So yeah. I'll keep doing this for my blog and I hope websites like SO keep doing that as well
simcop2387 - 3 hours ago The usual way I've seen to deal with this kind of ambiguity is by doing a 301 redirect, so that bookmarks get changed and the URL in the address bar is also changed. It doesn't fix external parties linking to the site with the now-deprecated URL, but there was never anything you could reasonably do about that.

> If you are going to insist upon doing this, how about doing it using a # on the page, so at least everyone had a chance to know that it is extra, random data that can be dropped from the URL without penalty and might not come from the website and so shouldn't be trusted?

The fragment doesn't get indexed by search engines, so not many will see it. Along with that, in my understanding, having something human-readable in the URL helps with SEO in at least Google and Bing, so doing this could hurt your search rankings, which isn't a good thing.
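A minimal sketch of the redirect-to-canonical pattern being discussed, assuming a lookup from a stable ID to the current slug; the data structure and function names are hypothetical:

    from typing import Optional, Tuple

    # Hypothetical lookup from a stable ID to the current canonical slug.
    CANONICAL_SLUGS = {"1as03jf08e": "moby-dick"}

    def handle(book_id: str, slug: Optional[str]) -> Tuple[int, str]:
        # Return (status, location or body) for a request to /books/<book_id>/<slug>.
        canonical = CANONICAL_SLUGS.get(book_id)
        if canonical is None:
            return 404, "not found"
        if slug != canonical:
            # 301 so bookmarks, caches, and crawlers pick up the canonical URL
            # instead of a stale or made-up slug.
            return 301, f"/books/{book_id}/{canonical}"
        return 200, "render the book page"

    assert handle("1as03jf08e", "moby-dik") == (301, "/books/1as03jf08e/moby-dick")
    assert handle("1as03jf08e", "moby-dick")[0] == 200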
othersideofcoin - 1 hours ago Minor nitpick, I'm not sure if exact match in URL slugs matters from Google's perspective very much. I do read that searchers' eyes can be drawn towards the exact match (which are frequently bolded in the SERPs), possibly leading to a higher clickthrough rate.
knome - 1 hours ago It's been a while since I was looking at how google's crawler worked. For items that had multiple ways of navigating there, I remember using the link rel="canonical" to let google know where the page would have been if not for the category information etc in the url.
hughes - 3 hours ago 1) and 2) are not a problem if the server accepts any value for the title token (which is the case on stack exchange).

3) is not a problem for hyperlinks (url not visible) or even for direct links (the length is not burdensome), and if you care about a short url an even shorter form is available.

4) seems like a feature? The person sending the link will only ever include as much information as they deem necessary anyway. If the recipient wants more info they'll either request it or click the link.

Trust is an interesting point, but you can equally put literally anything in the client-side anchor (e.g. meta.stackexchange.com/questions/148454/#definitely-not-a-rick-roll) so I don't see what a viable alternative would be.
overcast - 4 hours ago This is the way I've always done it as well, and super easy to implement. For example:

    router.get('/article/:article_shortid*?', function (req, res) { });

catches /article/28424824/this-is-my-article, and also /article/28424824
loevborg - 4 hours ago Ha, I didn't realize that you could change the question title or even leave it out altogether without breaking the link. Neat!
Moter8 - 3 hours ago Strangely enough, discourse uses the following style: https://meta.discourse.org/t/deleted-topics-where-are-they/2...

/t/ for topic, slug for readability, then a topic id, and at last a reply id.
ehsankia - 3 hours ago So does reddit. Go to any comment section. You can remove the latter part with the title and only leave the identifier, and the link will still work. The short link actually only contains the identifier.
detaro - 3 hours ago and your comment is the perfect demonstration why: when truncated, the id gets cut off before the slug.
TeMPOraL - 3 hours ago Which is... something you don't want to happen, right?
detaro - 3 hours ago Right, I was thinking purely of display truncation, like here. For copy-paste from here, or other actual truncation, it's bad, true.
dom0 - 4 hours ago A bunch of news sites use similar URL parsing; they tend to not care about the "slug" either. I think this is, in the general case, the best way.
spiderfarmer - 4 hours ago As long as you provide a canonical URL.
duskwuff - 3 hours ago Or if you redirect any non-canonical URLs to the canonical one.
digikata - 3 hours ago This seems like it's vulnerable to some form of abuse.

library.com/books/1as03jf08e/Moby-Dick/
library.com/books/1as03jf08e/Hitchhikers-Guide-to-the-Galaxy

Now lead to the same place...
madeofpalk - 1 hours ago eh. You can do that with query strings and hashes in URLs anyway. https://news.ycombinator.com/user?id=digikata&profile=bad-pe...
always_good - 2 hours ago You would redirect to the canonical one.
WorldMaker - 2 hours ago You don't necessarily have to redirect, but you should at least include a `<link rel="canonical">` tag (as the given example StackOverflow does) so that search robots and other website (scrape and/or API) clients know which one is the canonical path, to avoid duplicate efforts.
always_good - 2 hours ago That only works for some crawlers. Certainly not for users. Meanwhile, everything obeys redirects. Since you bring up Stack Overflow, notice that they do the canonical redirect. Change the title in the URL and you'll get redirected.
awj - 2 hours ago I think the concern is in the way it obscures the target. Replace "Moby Dick" with a Chuck Tingle (warning, probably nsfw) book. Now that second link is a serious problem.
digikata - 1 hours ago I'm not even sure it's a serious problem - a possible annoyance, and perhaps, for a spammy site owner, maybe even a feature. But as a web user, I'm not really fond of that added uncertainty.
always_good - 2 hours ago I see what you're saying, but it doesn't seem like much more than a funny gag you might pull on a friend. If a website is concerned about that case, then instead of letting it inform their URL design, they should have a "Warning: Adult content. [Continue] [Back]" interstitial like Reddit or Steam.
CydeWeys - 52 minutes ago Goodreads does something similar, which I also appreciate. An example: https://www.goodreads.com/book/show/22733729-the-long-way-to... You can take off any of the words past the numeric ID and it still works just fine.
[deleted]
afandian - 4 hours ago Good advice. Interesting that Canonical URLs aren't mentioned. But the sheer arrogance of serving a webpage that doesn't render any text unless you execute their JavaScript really annoys me. It's not a fancy interactive web-app, it's a webpage with some text on it.
brogrammernot - 4 hours ago I understand the frustration, but you also understand that the vast majority of individuals render JS on the page and do not use text-only browsers. It's not worth the time to appeal to such a minority share of internet users.
afandian - 2 hours ago Your argument holds for web apps where it might be extra work to do progressive enhancement. But this is literally a webpage of text. It is more work to get JS involved. Humans using off-the-shelf browsers aren't the only ones who consume webpages.
fixermark - 1 hours ago The notion of surfing the web without JavaScript enabled is increasingly antiquated. You can't even log into Google without JS enabled; it's necessary to mandate it because of iframe attacks.
afandian - 1 hours ago Not all web pages are (or at least need to be) web apps. Logging into an account vs reading a static page is apples to oranges. Mandating JS to get any content, no matter how static, seems like the start of the death of e.g. Linked Data and the web as an open, standards-based platform. I know I'm in the minority, but diversity is a strength, and there are few places where it's more important than the web.
[deleted]
icebraining - 2 hours ago Loaded just fine with NoScript here.
bo1024 - 2 minutes ago I didn't get any text until I enabled JS (using uMatrix).
tejtm - 2 hours ago http://journals.plos.org/plosbiology/article?id=10.1371/jour...

Abstract: In many disciplines, data are highly decentralized across thousands of online databases (repositories, registries, and knowledgebases). Wringing value from such databases depends on the discipline of data science and on the humble bricks and mortar that make integration possible; identifiers are a core component of this integration infrastructure. Drawing on our experience and on work by other groups, we outline 10 lessons we have learned about the identifier qualities and best practices that facilitate large-scale data integration. Specifically, we propose actions that identifier practitioners (database providers) should take in the design, provision and reuse of identifiers. We also outline the important considerations for those referencing identifiers in various circumstances, including by authors and data generators. While the importance and relevance of each lesson will vary by context, there is a need for increased awareness about how to avoid and manage common identifier problems, especially those related to persistence and web-accessibility/resolvability. We focus strongly on web-based identifiers in the life sciences; however, the principles are broadly relevant to other disciplines.

claimer: I am one of the many authors.
jgrodziski - 1 hours ago Identifying changing "stuff" in the real world is for me a fundamental topic of any serious data modeling for any kind of software (be it an API, traditional database work, etc). Identity is also at the center of the entity concept of Domain-Driven Design (see the seminal book by Eric Evans on that: https://www.amazon.com/Domain-Driven-Design-Tackling-Complex...).

I started changing my way of looking at identity by reading the rationale of clojure (https://clojure.org/about/state#_working_models_and_identity) -> "Identities are mental tools we use to superimpose continuity on a world which is constantly, functionally, creating new values of itself."

The timeless book "Data and Reality" is also priceless: https://www.amazon.com/Data-Reality-Perspective-Perceiving-I...

More specifically concerning the article, I do agree with the author's point of view distinguishing access by identifier from hierarchical compound names, which are better represented as a search. On the id stuff, I find the amazon approach of using URNs (in summary: a namespaced identifier) very appealing: http://philcalcado.com/2017/03/22/pattern_using_seudo-uris_w... And of course, performance matters concerning IDs and UUIDs: https://tomharrisonjr.com/uuid-or-guid-as-primary-keys-be-ca...

Happy data modeling :)

EDIT: add an excerpt from the clojure rationale
a13n - 4 hours ago For Canny, I wrote some awesome code that I'm proud of that turns a "post title" into a unique URL. https://react-native.canny.io/feature-requests/p/headless-js...

For example, a post with title "post title" will get the url "post-title". Then a second post with title "post title" will get the url "post-title-1". Since there's only one URL part associated with each post, it's a unique identifier. This gets rid of the ugly id in the URL, for epic URL awesomeness.

Furthermore, if you edit the first post to have the title "new post title" then its URL will update to "new-post-title", but "post-title" will still redirect to "new-post-title".

Someday I'm gonna open source a lib that lets you easily add awesome URLs to your app. :)
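A toy sketch of the scheme as described (unique slugs, a numeric suffix on collision, old slugs redirecting to the current one); the in-memory dictionaries here stand in for whatever database Canny actually uses:

    import re

    class SlugIndex:
        """Toy in-memory version of the scheme described above: unique slugs,
        numeric suffixes on collisions, and old slugs redirecting to the current one."""

        def __init__(self):
            self.slug_to_post = {}   # slug -> post id (old slugs stay around as redirects)
            self.post_to_slug = {}   # post id -> current canonical slug

        @staticmethod
        def slugify(title):
            return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

        def assign(self, post_id, title):
            base = self.slugify(title)
            slug, n = base, 0
            # On collision with a different post, append -1, -2, ... until unique.
            while slug in self.slug_to_post and self.slug_to_post[slug] != post_id:
                n += 1
                slug = f"{base}-{n}"
            self.slug_to_post[slug] = post_id
            self.post_to_slug[post_id] = slug
            return slug

        def resolve(self, slug):
            # Returns (post_id, canonical_slug); callers redirect if the slugs differ.
            post_id = self.slug_to_post.get(slug)
            return (post_id, self.post_to_slug.get(post_id)) if post_id is not None else (None, None)

    idx = SlugIndex()
    assert idx.assign(1, "post title") == "post-title"
    assert idx.assign(2, "post title") == "post-title-1"
    assert idx.assign(1, "new post title") == "new-post-title"
    assert idx.resolve("post-title") == (1, "new-post-title")   # old slug still redirects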
always_good - 2 hours ago The annoying part is doing the database lookups to check for collisions / canonicalization, so what would your lib be generalizing?
a13n - 1 hours ago Yeah good point. Maybe it'd be better as a blog post with an associated repo in one implementation (Node + Mongo) and great documentation.
jlg23 - 3 hours ago > For Canny, I wrote some awesome code that I'm proud of that turns a "post title" into a unique URL.Did you mean "slug"? What you are describing is a basic feature of most blogging software since the inception of blogs...
a13n - 1 hours ago It's way more than that.
- Automatically handling duplicates
- Avoiding needing to include the unique ID in the URL
- Updating the URL after editing the post
- Redirecting previous versions to the new version
vilmosi - 16 minutes ago I think most blog platforms have that...
jey - 2 hours ago There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton https://martinfowler.com/bliki/TwoHardThings.html
andrewstuart2 - 1 hours ago Aw, you didn't quote the best one from that page."There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors."
kornish - 59 minutes ago Haha, didn't see that you beat me to it. It's worth including the third saying on the page just for completeness: "There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery"
kornish - 59 minutes ago "There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors."
amelius - 4 hours ago Why not make every URL that's shown in the title bar a permalink by default? That way, you have the best of both worlds in all cases. If another object tries to use the same URL as another object (which was used first), then a new URL must be generated (just add something at the end of the name).
wyndham - 4 hours ago The article's main insight: "URLs based on hierarchical names are actually the URLs of search results rather than the URLs of the entities in those search results".
mjevans - 1 hours ago In the most technical sense both are searches encoded into URI form. The search for the (hopefully) GUID just happens to be for a specific mechanical object, while the other describes the taxonomic categorization of what a matching item would look like.
always_good - 1 hours ago Though their "/search?kind=book&title=moby-dick&shelf=american-literature" example is fundamentally different in that all filters (being URL query parameters) are optional and can be arbitrarily combined. I didn't quite understand the point of the hierarchical "search URL" when you have the /search one implemented, and they go on to say you could implement both if you have the time and energy.
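A small sketch of why the query-parameter form composes more freely: every filter is optional, and any combination narrows the results. The sample data and field names here are made up for illustration:

    from typing import Dict, List

    BOOKS = [
        {"id": "1as03jf08e", "kind": "book", "title": "moby-dick", "shelf": "american-literature"},
        {"id": "9zx81kq", "kind": "book", "title": "walden", "shelf": "american-literature"},
    ]

    def search(filters: Dict[str, str]) -> List[dict]:
        # Every query parameter is optional; any combination narrows the result set.
        return [b for b in BOOKS if all(b.get(k) == v for k, v in filters.items())]

    # /search?shelf=american-literature                 -> both books
    # /search?shelf=american-literature&title=walden    -> just one
    assert len(search({"shelf": "american-literature"})) == 2
    assert search({"shelf": "american-literature", "title": "walden"})[0]["id"] == "9zx81kq"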
bvrmn - 2 hours ago Many commenters here and the author of the OP talk about URLs in the browser address bar. However, the article has "API design" in the title.
nayuki - 3 hours ago The article talks about referring to resources by using URLs containing opaque ID numbers versus URLs containing human-readable hierarchical paths and names. They give examples like bank accounts and library books.

This problem about naming URLs is also present in file system design. File names can be short, meaningful, context-sensitive, and human-friendly; or they can be long, unique, and permanent. For example, a photo might be named IMG_1234.jpg or Mountain.jpg, or it can be named 63f8d706e07a308964e3399d9fbf8774d37493e787218ac055a572dfeed49bbe.jpg. The problem with the short names is that they can easily collide, and often change at the whim of the user. The article highlights the difference between the identity of an object (the permanent long name) versus searching for an object (the human-friendly path, which could return different results each time).

For decades, the core assumption in file system design is to provide hierarchical paths that refer to mutable files. A number of alternative systems have sprouted which upend this assumption - by having all files be immutable, addressed by hash, and searchable through other mechanisms. Examples include Git version control, BitTorrent, IPFS, Camlistore, and my own unnamed proposal: https://www.nayuki.io/page/designing-a-better-nonhierarchica... (Previous discussion: https://news.ycombinator.com/item?id=14537650 )

Personally, I think immutable files present a fascinating opportunity for exploration, because they make it possible to create stable metadata. In a mutable hierarchical file system, metadata (such as photo tags or song titles) can be stored either within the file itself, or in a separate file that points to the main file. But "pointers" in the form of hard links or symlinks are brittle, hence storing metadata as a separate file is perilous. Moreover, the main file can be overwritten with completely different data, and the metadata can become out of date. By contrast, if the metadata points to the main data by hash, then the reference is unambiguous, and the metadata can never accidentally point to the "wrong" file in the future.
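A small sketch of the content-addressing idea in general (not the linked proposal itself): the file's name is the hash of its bytes, and metadata refers to the file by that hash, so the reference can never drift.

    import hashlib
    import json

    def content_address(data: bytes) -> str:
        # Name a blob by the SHA-256 hash of its contents, so the name is permanent.
        return hashlib.sha256(data).hexdigest()

    store = {}                                  # an immutable blob store: hash -> bytes
    photo = b"...raw JPEG bytes..."
    photo_name = content_address(photo)
    store[photo_name] = photo

    # Metadata is its own immutable blob and points at the photo by hash, so it can
    # never accidentally end up describing a different file if the original is replaced.
    metadata = json.dumps({"target": photo_name, "tags": ["mountain", "vacation"]}).encode()
    store[content_address(metadata)] = metadata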
hire_charts - 1 hours ago A long time ago, around when I was first taking systems programming courses, I had this vision for a filesystem and file explorer that would do exactly what you say. I imagined an entire OS without any filepaths for user data (in the traditional, hierarchical sense). My opinion (both now and back then) was that tree structures as a personal data filing system almost always made more of a mess than they actually solved. Especially for non-techies.

Rather, everything would automatically be ingested, collated, categorized, and (of course) searchable by a wide range of metadata. Much of it would be automatic, but it would also support hand-tagging files with custom metadata, like project or event names, and custom "categorizers" for more specialized file types.

Depending on the types of files, you could imagine rich views on top -- like photos getting their own part of the system with time-series exploration tools, geolocation, and person-tagging with face recognition, or audio files being automatically surfaced in a media library, with heuristics used to classify by artist, genre, etc. But these views would be fundamentally separate from the underlying data, and any mutations would be stored as new versions on top of underlying, immutable files, making it easy to move things between views or upgrade the higher-level software that depended on views.

This was years ago, and I never got around to doing any of that (it would've been a massive project that likely would've fallen flat on its face). And now, in a roundabout kind of way, we've ended up with cloud-based systems that accomplish a lot of what I had imagined. I'd go so far as to say that local filesystems are quickly becoming obsolete for the average computer user, especially those who are primarily on phones and tablets. It's a lot more distributed across 3rd-party services than what I had in mind, but that at least makes it "safer" from being lost all at once (despite numerous privacy concerns).
lwansbrough - 4 hours ago Nice, this reflects the choice I've made with a recent API design. This is especially important for entity names you don't control. For example, we ingest gamertags and IDs from players of Xbox Live, PSN, Steam, Origin, Battle.net, etc. - each have their own requirements in terms of what is allowed in a username, and even whether or not they're unique. Often you can't ensure a user is unique by their gamertag alone. You can't even ensure uniqueness based on gamertag and platform name. Reality is that search is almost always required in these cases, and that's why we've implemented search in the way described in this article, with each result pointing to a GUID representing a gamer persona.
andrewstuart2 - 3 hours ago "The case for identifiers" is really more of a case for surrogate keys. Surrogate keys need not be opaque, but rather are distinguished by the fact that they're assigned by an authority and may be completely unrelated to the properties of an entity.Natural keys, meaning entity identification by some unique combination of properties, are hard to get right (oops, your email address isn't unique, or it's a mailing list) and a pain to translate into a name (`where x = x' and y = y' and z = z'`, or `/x/x'/y/y'/z/z'`, etc.).Surrogate keys, on the other hand, make it easy to identify one and only one object forever, but only so long as everybody uses the same key for the same thing.And as mentioned in the article, the most appropriate is usually both. Often you don't have the surrogate key, so you need to look up by the natural key, but when you do have the surrogate key, it's fastest and most likely to be correct if you use that in your naming scheme.